00:00:00.001 Started by upstream project "autotest-nightly" build number 4344
00:00:00.001 originally caused by:
00:00:00.002 Started by upstream project "nightly-trigger" build number 3707
00:00:00.002 originally caused by:
00:00:00.002 Started by timer
00:00:00.028 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.029 The recommended git tool is: git
00:00:00.029 using credential 00000000-0000-0000-0000-000000000002
00:00:00.031 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.047 Fetching changes from the remote Git repository
00:00:00.052 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.072 Using shallow fetch with depth 1
00:00:00.072 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.072 > git --version # timeout=10
00:00:00.103 > git --version # 'git version 2.39.2'
00:00:00.103 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.134 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.134 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.816 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.827 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.838 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:04.838 > git config core.sparsecheckout # timeout=10
00:00:04.848 > git read-tree -mu HEAD # timeout=10
00:00:04.861 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:04.890 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:04.891 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:04.985 [Pipeline] Start of Pipeline
00:00:04.997 [Pipeline] library
00:00:04.998 Loading library shm_lib@master
00:00:04.998 Library shm_lib@master is cached. Copying from home.
00:00:05.013 [Pipeline] node
00:00:05.027 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest
00:00:05.028 [Pipeline] {
00:00:05.039 [Pipeline] catchError
00:00:05.041 [Pipeline] {
00:00:05.053 [Pipeline] wrap
00:00:05.063 [Pipeline] {
00:00:05.072 [Pipeline] stage
00:00:05.073 [Pipeline] { (Prologue)
00:00:05.093 [Pipeline] echo
00:00:05.095 Node: VM-host-WFP7
00:00:05.101 [Pipeline] cleanWs
00:00:05.111 [WS-CLEANUP] Deleting project workspace...
00:00:05.111 [WS-CLEANUP] Deferred wipeout is used...
00:00:05.119 [WS-CLEANUP] done
00:00:05.304 [Pipeline] setCustomBuildProperty
00:00:05.389 [Pipeline] httpRequest
00:00:05.734 [Pipeline] echo
00:00:05.737 Sorcerer 10.211.164.20 is alive
00:00:05.747 [Pipeline] retry
00:00:05.749 [Pipeline] {
00:00:05.763 [Pipeline] httpRequest
00:00:05.768 HttpMethod: GET
00:00:05.768 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:05.769 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:05.770 Response Code: HTTP/1.1 200 OK
00:00:05.771 Success: Status code 200 is in the accepted range: 200,404
00:00:05.771 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.246 [Pipeline] }
00:00:06.262 [Pipeline] // retry
00:00:06.293 [Pipeline] sh
00:00:06.578 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.597 [Pipeline] httpRequest
00:00:06.966 [Pipeline] echo
00:00:06.968 Sorcerer 10.211.164.20 is alive
00:00:06.976 [Pipeline] retry
00:00:06.978 [Pipeline] {
00:00:06.989 [Pipeline] httpRequest
00:00:06.993 HttpMethod: GET
00:00:06.994 URL: http://10.211.164.20/packages/spdk_a2f5e1c2d535934bced849d8b079523bc74c98f1.tar.gz
00:00:06.994 Sending request to url: http://10.211.164.20/packages/spdk_a2f5e1c2d535934bced849d8b079523bc74c98f1.tar.gz
00:00:06.996 Response Code: HTTP/1.1 200 OK
00:00:06.997 Success: Status code 200 is in the accepted range: 200,404
00:00:06.997 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_a2f5e1c2d535934bced849d8b079523bc74c98f1.tar.gz
00:00:31.548 [Pipeline] }
00:00:31.569 [Pipeline] // retry
00:00:31.577 [Pipeline] sh
00:00:31.861 + tar --no-same-owner -xf spdk_a2f5e1c2d535934bced849d8b079523bc74c98f1.tar.gz
00:00:34.407 [Pipeline] sh
00:00:34.692 + git -C spdk log --oneline -n5
00:00:34.692 a2f5e1c2d blob: don't free bs when spdk_bs_destroy/spdk_bs_unload fails
00:00:34.692 0f59982b6 blob: don't use bs_load_ctx_fail in bs_write_used_* functions
00:00:34.692 0354bb8e8 nvme/rdma: Force qp disconnect on pg remove
00:00:34.692 0ea9ac02f accel/mlx5: Create pool of UMRs
00:00:34.692 60adca7e1 lib/mlx5: API to configure UMR
00:00:34.715 [Pipeline] writeFile
00:00:34.731 [Pipeline] sh
00:00:35.017 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:00:35.030 [Pipeline] sh
00:00:35.312 + cat autorun-spdk.conf
00:00:35.312 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:35.312 SPDK_RUN_ASAN=1
00:00:35.312 SPDK_RUN_UBSAN=1
00:00:35.312 SPDK_TEST_RAID=1
00:00:35.312 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:35.319 RUN_NIGHTLY=1
00:00:35.321 [Pipeline] }
00:00:35.335 [Pipeline] // stage
00:00:35.352 [Pipeline] stage
00:00:35.354 [Pipeline] { (Run VM)
00:00:35.367 [Pipeline] sh
00:00:35.651 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:00:35.651 + echo 'Start stage prepare_nvme.sh'
00:00:35.651 Start stage prepare_nvme.sh
00:00:35.651 + [[ -n 0 ]]
00:00:35.651 + disk_prefix=ex0
00:00:35.651 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]]
00:00:35.651 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]]
00:00:35.651 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf
00:00:35.651 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:35.651 ++ SPDK_RUN_ASAN=1
00:00:35.651 ++ SPDK_RUN_UBSAN=1
00:00:35.651 ++ SPDK_TEST_RAID=1
00:00:35.651 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:35.651 ++ RUN_NIGHTLY=1
00:00:35.651 + cd /var/jenkins/workspace/raid-vg-autotest
00:00:35.651 + nvme_files=()
00:00:35.651 + declare -A nvme_files
00:00:35.651 + backend_dir=/var/lib/libvirt/images/backends
00:00:35.651 + nvme_files['nvme.img']=5G
00:00:35.651 + nvme_files['nvme-cmb.img']=5G
00:00:35.651 + nvme_files['nvme-multi0.img']=4G
00:00:35.651 + nvme_files['nvme-multi1.img']=4G
00:00:35.651 + nvme_files['nvme-multi2.img']=4G
00:00:35.651 + nvme_files['nvme-openstack.img']=8G
00:00:35.651 + nvme_files['nvme-zns.img']=5G
00:00:35.651 + (( SPDK_TEST_NVME_PMR == 1 ))
00:00:35.651 + (( SPDK_TEST_FTL == 1 ))
00:00:35.651 + (( SPDK_TEST_NVME_FDP == 1 ))
00:00:35.651 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:00:35.651 + for nvme in "${!nvme_files[@]}"
00:00:35.651 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G
00:00:35.651 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:00:35.651 + for nvme in "${!nvme_files[@]}"
00:00:35.651 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G
00:00:35.651 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:00:35.651 + for nvme in "${!nvme_files[@]}"
00:00:35.651 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G
00:00:35.651 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:00:35.651 + for nvme in "${!nvme_files[@]}"
00:00:35.651 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G
00:00:35.651 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:00:35.651 + for nvme in "${!nvme_files[@]}"
00:00:35.651 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G
00:00:35.651 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:00:35.651 + for nvme in "${!nvme_files[@]}"
00:00:35.651 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G
00:00:35.651 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:00:35.651 + for nvme in "${!nvme_files[@]}"
00:00:35.651 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G
00:00:35.911 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:00:35.911 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu
00:00:35.911 + echo 'End stage prepare_nvme.sh'
00:00:35.911 End stage prepare_nvme.sh
00:00:35.923 [Pipeline] sh
00:00:36.206 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:00:36.207 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex0-nvme.img -b /var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -H -a -v -f fedora39
00:00:36.207
00:00:36.207 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant
00:00:36.207 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk
00:00:36.207 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest
00:00:36.207 HELP=0
00:00:36.207 DRY_RUN=0
00:00:36.207 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,
00:00:36.207 NVME_DISKS_TYPE=nvme,nvme,
00:00:36.207 NVME_AUTO_CREATE=0
00:00:36.207 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,
00:00:36.207 NVME_CMB=,,
00:00:36.207 NVME_PMR=,,
00:00:36.207 NVME_ZNS=,,
00:00:36.207 NVME_MS=,,
00:00:36.207 NVME_FDP=,,
00:00:36.207 SPDK_VAGRANT_DISTRO=fedora39
00:00:36.207 SPDK_VAGRANT_VMCPU=10
00:00:36.207 SPDK_VAGRANT_VMRAM=12288
00:00:36.207 SPDK_VAGRANT_PROVIDER=libvirt
00:00:36.207 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:00:36.207 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:00:36.207 SPDK_OPENSTACK_NETWORK=0
00:00:36.207 VAGRANT_PACKAGE_BOX=0
00:00:36.207 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:00:36.207 FORCE_DISTRO=true
00:00:36.207 VAGRANT_BOX_VERSION=
00:00:36.207 EXTRA_VAGRANTFILES=
00:00:36.207 NIC_MODEL=virtio
00:00:36.207
00:00:36.207 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt'
00:00:36.207 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest
00:00:38.112 Bringing machine 'default' up with 'libvirt' provider...
00:00:38.681 ==> default: Creating image (snapshot of base box volume).
00:00:38.681 ==> default: Creating domain with the following settings...
00:00:38.681 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733591831_ae725391a787ab0fd132
00:00:38.681 ==> default: -- Domain type: kvm
00:00:38.681 ==> default: -- Cpus: 10
00:00:38.682 ==> default: -- Feature: acpi
00:00:38.682 ==> default: -- Feature: apic
00:00:38.682 ==> default: -- Feature: pae
00:00:38.682 ==> default: -- Memory: 12288M
00:00:38.682 ==> default: -- Memory Backing: hugepages:
00:00:38.682 ==> default: -- Management MAC:
00:00:38.682 ==> default: -- Loader:
00:00:38.682 ==> default: -- Nvram:
00:00:38.682 ==> default: -- Base box: spdk/fedora39
00:00:38.682 ==> default: -- Storage pool: default
00:00:38.682 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733591831_ae725391a787ab0fd132.img (20G)
00:00:38.682 ==> default: -- Volume Cache: default
00:00:38.682 ==> default: -- Kernel:
00:00:38.682 ==> default: -- Initrd:
00:00:38.682 ==> default: -- Graphics Type: vnc
00:00:38.682 ==> default: -- Graphics Port: -1
00:00:38.682 ==> default: -- Graphics IP: 127.0.0.1
00:00:38.682 ==> default: -- Graphics Password: Not defined
00:00:38.682 ==> default: -- Video Type: cirrus
00:00:38.682 ==> default: -- Video VRAM: 9216
00:00:38.682 ==> default: -- Sound Type:
00:00:38.682 ==> default: -- Keymap: en-us
00:00:38.682 ==> default: -- TPM Path:
00:00:38.682 ==> default: -- INPUT: type=mouse, bus=ps2
00:00:38.682 ==> default: -- Command line args:
00:00:38.682 ==> default: -> value=-device,
00:00:38.682 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:00:38.682 ==> default: -> value=-drive,
00:00:38.682 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0,
00:00:38.682 ==> default: -> value=-device,
00:00:38.682 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:38.682 ==> default: -> value=-device,
00:00:38.682 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:00:38.682 ==> default: -> value=-drive,
00:00:38.682 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:00:38.682 ==> default: -> value=-device,
00:00:38.682 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:38.682 ==> default: -> value=-drive,
00:00:38.682 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:00:38.682 ==> default: -> value=-device,
00:00:38.682 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:38.682 ==> default: -> value=-drive,
00:00:38.682 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:00:38.682 ==> default: -> value=-device,
00:00:38.682 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:38.942 ==> default: Creating shared folders metadata...
00:00:38.942 ==> default: Starting domain.
00:00:40.361 ==> default: Waiting for domain to get an IP address...
00:00:58.457 ==> default: Waiting for SSH to become available...
00:00:59.834 ==> default: Configuring and enabling network interfaces...
00:01:06.408 default: SSH address: 192.168.121.200:22
00:01:06.408 default: SSH username: vagrant
00:01:06.408 default: SSH auth method: private key
00:01:08.952 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:17.084 ==> default: Mounting SSHFS shared folder...
00:01:19.627 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:19.627 ==> default: Checking Mount..
00:01:21.016 ==> default: Folder Successfully Mounted!
00:01:21.017 ==> default: Running provisioner: file...
00:01:22.398 default: ~/.gitconfig => .gitconfig
00:01:22.658
00:01:22.658 SUCCESS!
00:01:22.658
00:01:22.658 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:01:22.658 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:22.658 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:01:22.658
00:01:22.668 [Pipeline] }
00:01:22.682 [Pipeline] // stage
00:01:22.691 [Pipeline] dir
00:01:22.691 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt
00:01:22.693 [Pipeline] {
00:01:22.704 [Pipeline] catchError
00:01:22.705 [Pipeline] {
00:01:22.717 [Pipeline] sh
00:01:23.066 + vagrant ssh-config --host vagrant
00:01:23.066 + sed -ne /^Host/,$p
00:01:23.066 + tee ssh_conf
00:01:25.610 Host vagrant
00:01:25.610 HostName 192.168.121.200
00:01:25.610 User vagrant
00:01:25.610 Port 22
00:01:25.610 UserKnownHostsFile /dev/null
00:01:25.610 StrictHostKeyChecking no
00:01:25.610 PasswordAuthentication no
00:01:25.610 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:01:25.610 IdentitiesOnly yes
00:01:25.610 LogLevel FATAL
00:01:25.610 ForwardAgent yes
00:01:25.610 ForwardX11 yes
00:01:25.610
00:01:25.626 [Pipeline] withEnv
00:01:25.628 [Pipeline] {
00:01:25.642 [Pipeline] sh
00:01:25.929 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:01:25.929 source /etc/os-release
00:01:25.929 [[ -e /image.version ]] && img=$(< /image.version)
00:01:25.929 # Minimal, systemd-like check.
00:01:25.929 if [[ -e /.dockerenv ]]; then
00:01:25.929 # Clear garbage from the node's name:
00:01:25.929 # agt-er_autotest_547-896 -> autotest_547-896
00:01:25.929 # $HOSTNAME is the actual container id
00:01:25.929 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:25.929 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:25.929 # We can assume this is a mount from a host where container is running,
00:01:25.929 # so fetch its hostname to easily identify the target swarm worker.
00:01:25.929 container="$(< /etc/hostname) ($agent)"
00:01:25.929 else
00:01:25.929 # Fallback
00:01:25.929 container=$agent
00:01:25.929 fi
00:01:25.929 fi
00:01:25.929 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:25.929
00:01:26.203 [Pipeline] }
00:01:26.231 [Pipeline] // withEnv
00:01:26.240 [Pipeline] setCustomBuildProperty
00:01:26.256 [Pipeline] stage
00:01:26.258 [Pipeline] { (Tests)
00:01:26.274 [Pipeline] sh
00:01:26.561 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:26.837 [Pipeline] sh
00:01:27.123 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:27.401 [Pipeline] timeout
00:01:27.402 Timeout set to expire in 1 hr 30 min
00:01:27.404 [Pipeline] {
00:01:27.420 [Pipeline] sh
00:01:27.705 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:01:28.275 HEAD is now at a2f5e1c2d blob: don't free bs when spdk_bs_destroy/spdk_bs_unload fails
00:01:28.290 [Pipeline] sh
00:01:28.580 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:01:28.854 [Pipeline] sh
00:01:29.139 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:29.420 [Pipeline] sh
00:01:29.705 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:01:29.965 ++ readlink -f spdk_repo
00:01:29.965 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:29.965 + [[ -n /home/vagrant/spdk_repo ]]
00:01:29.965 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:29.965 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:29.965 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:29.965 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:29.965 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:29.965 + [[ raid-vg-autotest == pkgdep-* ]]
00:01:29.965 + cd /home/vagrant/spdk_repo
00:01:29.965 + source /etc/os-release
00:01:29.965 ++ NAME='Fedora Linux'
00:01:29.965 ++ VERSION='39 (Cloud Edition)'
00:01:29.965 ++ ID=fedora
00:01:29.965 ++ VERSION_ID=39
00:01:29.965 ++ VERSION_CODENAME=
00:01:29.965 ++ PLATFORM_ID=platform:f39
00:01:29.965 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:29.965 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:29.965 ++ LOGO=fedora-logo-icon
00:01:29.965 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:29.965 ++ HOME_URL=https://fedoraproject.org/
00:01:29.965 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:29.965 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:29.965 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:29.965 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:29.965 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:29.965 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:29.965 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:29.965 ++ SUPPORT_END=2024-11-12
00:01:29.965 ++ VARIANT='Cloud Edition'
00:01:29.965 ++ VARIANT_ID=cloud
00:01:29.965 + uname -a
00:01:29.965 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:29.965 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:30.535 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:01:30.535 Hugepages
00:01:30.535 node hugesize free / total
00:01:30.535 node0 1048576kB 0 / 0
00:01:30.535 node0 2048kB 0 / 0
00:01:30.535
00:01:30.535 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:30.535 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:01:30.535 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:01:30.535 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:01:30.535 + rm -f /tmp/spdk-ld-path
00:01:30.535 + source autorun-spdk.conf
00:01:30.535 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:30.535 ++ SPDK_RUN_ASAN=1
00:01:30.535 ++ SPDK_RUN_UBSAN=1
00:01:30.535 ++ SPDK_TEST_RAID=1
00:01:30.535 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:30.535 ++ RUN_NIGHTLY=1
00:01:30.535 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:30.535 + [[ -n '' ]]
00:01:30.535 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:30.535 + for M in /var/spdk/build-*-manifest.txt
00:01:30.535 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:30.535 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:01:30.535 + for M in /var/spdk/build-*-manifest.txt
00:01:30.535 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:30.535 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:30.535 + for M in /var/spdk/build-*-manifest.txt
00:01:30.535 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:30.535 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:30.535 ++ uname
00:01:30.535 + [[ Linux == \L\i\n\u\x ]]
00:01:30.535 + sudo dmesg -T
00:01:30.796 + sudo dmesg --clear
00:01:30.796 + dmesg_pid=5428
00:01:30.796 + sudo dmesg -Tw
00:01:30.796 + [[ Fedora Linux == FreeBSD ]]
00:01:30.796 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:30.796 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:30.796 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:30.796 + [[ -x /usr/src/fio-static/fio ]]
00:01:30.796 + export FIO_BIN=/usr/src/fio-static/fio
00:01:30.796 + FIO_BIN=/usr/src/fio-static/fio
00:01:30.796 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:30.796 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:30.796 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:30.796 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:30.796 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:30.796 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:30.796 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:30.796 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:30.796 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:30.796 17:18:04 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
17:18:04 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
17:18:04 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
17:18:04 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1
17:18:04 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1
17:18:04 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1
17:18:04 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
17:18:04 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=1
17:18:04 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
17:18:04 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:31.057 17:18:04 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
17:18:04 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
17:18:04 -- scripts/common.sh@15 -- $ shopt -s extglob
17:18:04 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
17:18:04 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
17:18:04 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
17:18:04 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
17:18:04 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
17:18:04 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
17:18:04 -- paths/export.sh@5 -- $ export PATH
17:18:04 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
17:18:04 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
17:18:04 -- common/autobuild_common.sh@493 -- $ date +%s
17:18:04 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733591884.XXXXXX
17:18:04 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733591884.BeWQZV
17:18:04 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
17:18:04 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
17:18:04 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
17:18:04 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
17:18:04 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
17:18:04 -- common/autobuild_common.sh@509 -- $ get_config_params
17:18:04 -- common/autotest_common.sh@409 -- $ xtrace_disable
17:18:04 -- common/autotest_common.sh@10 -- $ set +x
17:18:04 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f'
17:18:04 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
17:18:04 -- pm/common@17 -- $ local monitor
17:18:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
17:18:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
17:18:04 -- pm/common@21 -- $ date +%s
17:18:04 -- pm/common@25 -- $ sleep 1
17:18:04 -- pm/common@21 -- $ date +%s
00:01:31.057 17:18:04 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733591884
17:18:04 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733591884
00:01:31.057 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733591884_collect-cpu-load.pm.log
00:01:31.057 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733591884_collect-vmstat.pm.log
00:01:31.998 17:18:05 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
17:18:05 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
17:18:05 -- spdk/autobuild.sh@12 -- $ umask 022
17:18:05 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
17:18:05 -- spdk/autobuild.sh@16 -- $ date -u
00:01:31.998 Sat Dec 7 05:18:05 PM UTC 2024
17:18:05 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:31.998 v25.01-pre-311-ga2f5e1c2d
17:18:05 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
17:18:05 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
17:18:05 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
17:18:05 -- common/autotest_common.sh@1111 -- $ xtrace_disable
17:18:05 -- common/autotest_common.sh@10 -- $ set +x
00:01:31.998 ************************************
00:01:31.998 START TEST asan
00:01:31.998 ************************************
00:01:31.998 using asan
17:18:05 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:01:31.998
00:01:31.998 real 0m0.001s
00:01:31.998 user 0m0.000s
00:01:31.998 sys 0m0.001s
17:18:05 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
17:18:05 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:31.998 ************************************
00:01:31.998 END TEST asan
00:01:31.998 ************************************
17:18:05 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
17:18:05 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
17:18:05 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
17:18:05 -- common/autotest_common.sh@1111 -- $ xtrace_disable
17:18:05 -- common/autotest_common.sh@10 -- $ set +x
00:01:31.998 ************************************
00:01:31.998 START TEST ubsan
00:01:31.998 ************************************
00:01:31.998 using ubsan
17:18:05 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:31.998
00:01:31.998 real 0m0.000s
00:01:31.998 user 0m0.000s
00:01:31.998 sys 0m0.000s
17:18:05 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
17:18:05 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:31.998 ************************************
00:01:31.998 END TEST ubsan
00:01:31.998 ************************************
00:01:32.256 17:18:05 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
17:18:05 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
17:18:05 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
17:18:05 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
17:18:05 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
17:18:05 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
17:18:05 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
17:18:05 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
17:18:05 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:01:32.256 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:32.256 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:32.824 Using 'verbs' RDMA provider
00:01:48.641 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:03.553 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:02:04.494 Creating mk/config.mk...done.
00:02:04.494 Creating mk/cc.flags.mk...done.
00:02:04.494 Type 'make' to build.
00:02:04.494 17:18:37 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
17:18:37 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
17:18:37 -- common/autotest_common.sh@1111 -- $ xtrace_disable
17:18:37 -- common/autotest_common.sh@10 -- $ set +x
00:02:04.495 ************************************
00:02:04.495 START TEST make
00:02:04.495 ************************************
17:18:37 make -- common/autotest_common.sh@1129 -- $ make -j10
00:02:04.755 make[1]: Nothing to be done for 'all'.
00:02:14.719 The Meson build system 00:02:14.719 Version: 1.5.0 00:02:14.719 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:14.719 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:14.719 Build type: native build 00:02:14.719 Program cat found: YES (/usr/bin/cat) 00:02:14.719 Project name: DPDK 00:02:14.719 Project version: 24.03.0 00:02:14.719 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:14.719 C linker for the host machine: cc ld.bfd 2.40-14 00:02:14.719 Host machine cpu family: x86_64 00:02:14.719 Host machine cpu: x86_64 00:02:14.719 Message: ## Building in Developer Mode ## 00:02:14.719 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:14.719 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:14.719 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:14.719 Program python3 found: YES (/usr/bin/python3) 00:02:14.719 Program cat found: YES (/usr/bin/cat) 00:02:14.719 Compiler for C supports arguments -march=native: YES 00:02:14.719 Checking for size of "void *" : 8 00:02:14.719 Checking for size of "void *" : 8 (cached) 00:02:14.719 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:14.719 Library m found: YES 00:02:14.719 Library numa found: YES 00:02:14.719 Has header "numaif.h" : YES 00:02:14.719 Library fdt found: NO 00:02:14.719 Library execinfo found: NO 00:02:14.719 Has header "execinfo.h" : YES 00:02:14.719 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:14.719 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:14.719 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:14.719 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:14.719 Run-time dependency openssl found: YES 3.1.1 00:02:14.719 Run-time dependency libpcap found: YES 1.10.4 00:02:14.719 Has header "pcap.h" with dependency 
libpcap: YES 00:02:14.719 Compiler for C supports arguments -Wcast-qual: YES 00:02:14.719 Compiler for C supports arguments -Wdeprecated: YES 00:02:14.719 Compiler for C supports arguments -Wformat: YES 00:02:14.719 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:14.719 Compiler for C supports arguments -Wformat-security: NO 00:02:14.719 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:14.719 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:14.719 Compiler for C supports arguments -Wnested-externs: YES 00:02:14.719 Compiler for C supports arguments -Wold-style-definition: YES 00:02:14.719 Compiler for C supports arguments -Wpointer-arith: YES 00:02:14.719 Compiler for C supports arguments -Wsign-compare: YES 00:02:14.719 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:14.719 Compiler for C supports arguments -Wundef: YES 00:02:14.719 Compiler for C supports arguments -Wwrite-strings: YES 00:02:14.719 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:14.719 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:14.719 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:14.719 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:14.719 Program objdump found: YES (/usr/bin/objdump) 00:02:14.719 Compiler for C supports arguments -mavx512f: YES 00:02:14.719 Checking if "AVX512 checking" compiles: YES 00:02:14.719 Fetching value of define "__SSE4_2__" : 1 00:02:14.719 Fetching value of define "__AES__" : 1 00:02:14.719 Fetching value of define "__AVX__" : 1 00:02:14.719 Fetching value of define "__AVX2__" : 1 00:02:14.719 Fetching value of define "__AVX512BW__" : 1 00:02:14.719 Fetching value of define "__AVX512CD__" : 1 00:02:14.719 Fetching value of define "__AVX512DQ__" : 1 00:02:14.719 Fetching value of define "__AVX512F__" : 1 00:02:14.719 Fetching value of define "__AVX512VL__" : 1 00:02:14.719 Fetching value of define 
"__PCLMUL__" : 1 00:02:14.719 Fetching value of define "__RDRND__" : 1 00:02:14.719 Fetching value of define "__RDSEED__" : 1 00:02:14.719 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:14.719 Fetching value of define "__znver1__" : (undefined) 00:02:14.719 Fetching value of define "__znver2__" : (undefined) 00:02:14.719 Fetching value of define "__znver3__" : (undefined) 00:02:14.719 Fetching value of define "__znver4__" : (undefined) 00:02:14.719 Library asan found: YES 00:02:14.719 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:14.719 Message: lib/log: Defining dependency "log" 00:02:14.719 Message: lib/kvargs: Defining dependency "kvargs" 00:02:14.719 Message: lib/telemetry: Defining dependency "telemetry" 00:02:14.719 Library rt found: YES 00:02:14.719 Checking for function "getentropy" : NO 00:02:14.719 Message: lib/eal: Defining dependency "eal" 00:02:14.719 Message: lib/ring: Defining dependency "ring" 00:02:14.719 Message: lib/rcu: Defining dependency "rcu" 00:02:14.719 Message: lib/mempool: Defining dependency "mempool" 00:02:14.719 Message: lib/mbuf: Defining dependency "mbuf" 00:02:14.719 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:14.719 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:14.719 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:14.719 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:14.719 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:14.719 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:14.719 Compiler for C supports arguments -mpclmul: YES 00:02:14.719 Compiler for C supports arguments -maes: YES 00:02:14.719 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:14.719 Compiler for C supports arguments -mavx512bw: YES 00:02:14.719 Compiler for C supports arguments -mavx512dq: YES 00:02:14.719 Compiler for C supports arguments -mavx512vl: YES 00:02:14.719 Compiler for C supports arguments -mvpclmulqdq: YES 
00:02:14.719 Compiler for C supports arguments -mavx2: YES 00:02:14.719 Compiler for C supports arguments -mavx: YES 00:02:14.719 Message: lib/net: Defining dependency "net" 00:02:14.719 Message: lib/meter: Defining dependency "meter" 00:02:14.719 Message: lib/ethdev: Defining dependency "ethdev" 00:02:14.719 Message: lib/pci: Defining dependency "pci" 00:02:14.719 Message: lib/cmdline: Defining dependency "cmdline" 00:02:14.719 Message: lib/hash: Defining dependency "hash" 00:02:14.719 Message: lib/timer: Defining dependency "timer" 00:02:14.719 Message: lib/compressdev: Defining dependency "compressdev" 00:02:14.719 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:14.719 Message: lib/dmadev: Defining dependency "dmadev" 00:02:14.719 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:14.719 Message: lib/power: Defining dependency "power" 00:02:14.719 Message: lib/reorder: Defining dependency "reorder" 00:02:14.719 Message: lib/security: Defining dependency "security" 00:02:14.719 Has header "linux/userfaultfd.h" : YES 00:02:14.719 Has header "linux/vduse.h" : YES 00:02:14.719 Message: lib/vhost: Defining dependency "vhost" 00:02:14.719 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:14.719 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:14.719 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:14.719 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:14.719 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:14.719 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:14.719 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:14.719 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:14.719 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:14.719 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:14.719 
Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:14.719 Configuring doxy-api-html.conf using configuration 00:02:14.719 Configuring doxy-api-man.conf using configuration 00:02:14.719 Program mandb found: YES (/usr/bin/mandb) 00:02:14.720 Program sphinx-build found: NO 00:02:14.720 Configuring rte_build_config.h using configuration 00:02:14.720 Message: 00:02:14.720 ================= 00:02:14.720 Applications Enabled 00:02:14.720 ================= 00:02:14.720 00:02:14.720 apps: 00:02:14.720 00:02:14.720 00:02:14.720 Message: 00:02:14.720 ================= 00:02:14.720 Libraries Enabled 00:02:14.720 ================= 00:02:14.720 00:02:14.720 libs: 00:02:14.720 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:14.720 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:14.720 cryptodev, dmadev, power, reorder, security, vhost, 00:02:14.720 00:02:14.720 Message: 00:02:14.720 =============== 00:02:14.720 Drivers Enabled 00:02:14.720 =============== 00:02:14.720 00:02:14.720 common: 00:02:14.720 00:02:14.720 bus: 00:02:14.720 pci, vdev, 00:02:14.720 mempool: 00:02:14.720 ring, 00:02:14.720 dma: 00:02:14.720 00:02:14.720 net: 00:02:14.720 00:02:14.720 crypto: 00:02:14.720 00:02:14.720 compress: 00:02:14.720 00:02:14.720 vdpa: 00:02:14.720 00:02:14.720 00:02:14.720 Message: 00:02:14.720 ================= 00:02:14.720 Content Skipped 00:02:14.720 ================= 00:02:14.720 00:02:14.720 apps: 00:02:14.720 dumpcap: explicitly disabled via build config 00:02:14.720 graph: explicitly disabled via build config 00:02:14.720 pdump: explicitly disabled via build config 00:02:14.720 proc-info: explicitly disabled via build config 00:02:14.720 test-acl: explicitly disabled via build config 00:02:14.720 test-bbdev: explicitly disabled via build config 00:02:14.720 test-cmdline: explicitly disabled via build config 00:02:14.720 test-compress-perf: explicitly disabled via build config 00:02:14.720 test-crypto-perf: explicitly disabled via build 
config 00:02:14.720 test-dma-perf: explicitly disabled via build config 00:02:14.720 test-eventdev: explicitly disabled via build config 00:02:14.720 test-fib: explicitly disabled via build config 00:02:14.720 test-flow-perf: explicitly disabled via build config 00:02:14.720 test-gpudev: explicitly disabled via build config 00:02:14.720 test-mldev: explicitly disabled via build config 00:02:14.720 test-pipeline: explicitly disabled via build config 00:02:14.720 test-pmd: explicitly disabled via build config 00:02:14.720 test-regex: explicitly disabled via build config 00:02:14.720 test-sad: explicitly disabled via build config 00:02:14.720 test-security-perf: explicitly disabled via build config 00:02:14.720 00:02:14.720 libs: 00:02:14.720 argparse: explicitly disabled via build config 00:02:14.720 metrics: explicitly disabled via build config 00:02:14.720 acl: explicitly disabled via build config 00:02:14.720 bbdev: explicitly disabled via build config 00:02:14.720 bitratestats: explicitly disabled via build config 00:02:14.720 bpf: explicitly disabled via build config 00:02:14.720 cfgfile: explicitly disabled via build config 00:02:14.720 distributor: explicitly disabled via build config 00:02:14.720 efd: explicitly disabled via build config 00:02:14.720 eventdev: explicitly disabled via build config 00:02:14.720 dispatcher: explicitly disabled via build config 00:02:14.720 gpudev: explicitly disabled via build config 00:02:14.720 gro: explicitly disabled via build config 00:02:14.720 gso: explicitly disabled via build config 00:02:14.720 ip_frag: explicitly disabled via build config 00:02:14.720 jobstats: explicitly disabled via build config 00:02:14.720 latencystats: explicitly disabled via build config 00:02:14.720 lpm: explicitly disabled via build config 00:02:14.720 member: explicitly disabled via build config 00:02:14.720 pcapng: explicitly disabled via build config 00:02:14.720 rawdev: explicitly disabled via build config 00:02:14.720 regexdev: explicitly 
disabled via build config 00:02:14.720 mldev: explicitly disabled via build config 00:02:14.720 rib: explicitly disabled via build config 00:02:14.720 sched: explicitly disabled via build config 00:02:14.720 stack: explicitly disabled via build config 00:02:14.720 ipsec: explicitly disabled via build config 00:02:14.720 pdcp: explicitly disabled via build config 00:02:14.720 fib: explicitly disabled via build config 00:02:14.720 port: explicitly disabled via build config 00:02:14.720 pdump: explicitly disabled via build config 00:02:14.720 table: explicitly disabled via build config 00:02:14.720 pipeline: explicitly disabled via build config 00:02:14.720 graph: explicitly disabled via build config 00:02:14.720 node: explicitly disabled via build config 00:02:14.720 00:02:14.720 drivers: 00:02:14.720 common/cpt: not in enabled drivers build config 00:02:14.720 common/dpaax: not in enabled drivers build config 00:02:14.720 common/iavf: not in enabled drivers build config 00:02:14.720 common/idpf: not in enabled drivers build config 00:02:14.720 common/ionic: not in enabled drivers build config 00:02:14.720 common/mvep: not in enabled drivers build config 00:02:14.720 common/octeontx: not in enabled drivers build config 00:02:14.720 bus/auxiliary: not in enabled drivers build config 00:02:14.720 bus/cdx: not in enabled drivers build config 00:02:14.720 bus/dpaa: not in enabled drivers build config 00:02:14.720 bus/fslmc: not in enabled drivers build config 00:02:14.720 bus/ifpga: not in enabled drivers build config 00:02:14.720 bus/platform: not in enabled drivers build config 00:02:14.720 bus/uacce: not in enabled drivers build config 00:02:14.720 bus/vmbus: not in enabled drivers build config 00:02:14.720 common/cnxk: not in enabled drivers build config 00:02:14.720 common/mlx5: not in enabled drivers build config 00:02:14.720 common/nfp: not in enabled drivers build config 00:02:14.720 common/nitrox: not in enabled drivers build config 00:02:14.720 common/qat: not 
in enabled drivers build config 00:02:14.720 common/sfc_efx: not in enabled drivers build config 00:02:14.720 mempool/bucket: not in enabled drivers build config 00:02:14.720 mempool/cnxk: not in enabled drivers build config 00:02:14.720 mempool/dpaa: not in enabled drivers build config 00:02:14.720 mempool/dpaa2: not in enabled drivers build config 00:02:14.720 mempool/octeontx: not in enabled drivers build config 00:02:14.720 mempool/stack: not in enabled drivers build config 00:02:14.720 dma/cnxk: not in enabled drivers build config 00:02:14.720 dma/dpaa: not in enabled drivers build config 00:02:14.720 dma/dpaa2: not in enabled drivers build config 00:02:14.720 dma/hisilicon: not in enabled drivers build config 00:02:14.720 dma/idxd: not in enabled drivers build config 00:02:14.720 dma/ioat: not in enabled drivers build config 00:02:14.720 dma/skeleton: not in enabled drivers build config 00:02:14.720 net/af_packet: not in enabled drivers build config 00:02:14.720 net/af_xdp: not in enabled drivers build config 00:02:14.720 net/ark: not in enabled drivers build config 00:02:14.720 net/atlantic: not in enabled drivers build config 00:02:14.720 net/avp: not in enabled drivers build config 00:02:14.720 net/axgbe: not in enabled drivers build config 00:02:14.720 net/bnx2x: not in enabled drivers build config 00:02:14.720 net/bnxt: not in enabled drivers build config 00:02:14.720 net/bonding: not in enabled drivers build config 00:02:14.720 net/cnxk: not in enabled drivers build config 00:02:14.720 net/cpfl: not in enabled drivers build config 00:02:14.720 net/cxgbe: not in enabled drivers build config 00:02:14.720 net/dpaa: not in enabled drivers build config 00:02:14.720 net/dpaa2: not in enabled drivers build config 00:02:14.720 net/e1000: not in enabled drivers build config 00:02:14.720 net/ena: not in enabled drivers build config 00:02:14.720 net/enetc: not in enabled drivers build config 00:02:14.720 net/enetfec: not in enabled drivers build config 
00:02:14.720 net/enic: not in enabled drivers build config 00:02:14.720 net/failsafe: not in enabled drivers build config 00:02:14.720 net/fm10k: not in enabled drivers build config 00:02:14.720 net/gve: not in enabled drivers build config 00:02:14.720 net/hinic: not in enabled drivers build config 00:02:14.720 net/hns3: not in enabled drivers build config 00:02:14.720 net/i40e: not in enabled drivers build config 00:02:14.720 net/iavf: not in enabled drivers build config 00:02:14.720 net/ice: not in enabled drivers build config 00:02:14.720 net/idpf: not in enabled drivers build config 00:02:14.720 net/igc: not in enabled drivers build config 00:02:14.720 net/ionic: not in enabled drivers build config 00:02:14.720 net/ipn3ke: not in enabled drivers build config 00:02:14.720 net/ixgbe: not in enabled drivers build config 00:02:14.720 net/mana: not in enabled drivers build config 00:02:14.720 net/memif: not in enabled drivers build config 00:02:14.720 net/mlx4: not in enabled drivers build config 00:02:14.720 net/mlx5: not in enabled drivers build config 00:02:14.720 net/mvneta: not in enabled drivers build config 00:02:14.720 net/mvpp2: not in enabled drivers build config 00:02:14.720 net/netvsc: not in enabled drivers build config 00:02:14.720 net/nfb: not in enabled drivers build config 00:02:14.720 net/nfp: not in enabled drivers build config 00:02:14.720 net/ngbe: not in enabled drivers build config 00:02:14.720 net/null: not in enabled drivers build config 00:02:14.720 net/octeontx: not in enabled drivers build config 00:02:14.720 net/octeon_ep: not in enabled drivers build config 00:02:14.720 net/pcap: not in enabled drivers build config 00:02:14.720 net/pfe: not in enabled drivers build config 00:02:14.720 net/qede: not in enabled drivers build config 00:02:14.720 net/ring: not in enabled drivers build config 00:02:14.720 net/sfc: not in enabled drivers build config 00:02:14.720 net/softnic: not in enabled drivers build config 00:02:14.720 net/tap: not in 
enabled drivers build config 00:02:14.720 net/thunderx: not in enabled drivers build config 00:02:14.720 net/txgbe: not in enabled drivers build config 00:02:14.720 net/vdev_netvsc: not in enabled drivers build config 00:02:14.720 net/vhost: not in enabled drivers build config 00:02:14.720 net/virtio: not in enabled drivers build config 00:02:14.720 net/vmxnet3: not in enabled drivers build config 00:02:14.720 raw/*: missing internal dependency, "rawdev" 00:02:14.720 crypto/armv8: not in enabled drivers build config 00:02:14.720 crypto/bcmfs: not in enabled drivers build config 00:02:14.720 crypto/caam_jr: not in enabled drivers build config 00:02:14.720 crypto/ccp: not in enabled drivers build config 00:02:14.720 crypto/cnxk: not in enabled drivers build config 00:02:14.720 crypto/dpaa_sec: not in enabled drivers build config 00:02:14.720 crypto/dpaa2_sec: not in enabled drivers build config 00:02:14.720 crypto/ipsec_mb: not in enabled drivers build config 00:02:14.720 crypto/mlx5: not in enabled drivers build config 00:02:14.720 crypto/mvsam: not in enabled drivers build config 00:02:14.720 crypto/nitrox: not in enabled drivers build config 00:02:14.720 crypto/null: not in enabled drivers build config 00:02:14.720 crypto/octeontx: not in enabled drivers build config 00:02:14.720 crypto/openssl: not in enabled drivers build config 00:02:14.720 crypto/scheduler: not in enabled drivers build config 00:02:14.720 crypto/uadk: not in enabled drivers build config 00:02:14.720 crypto/virtio: not in enabled drivers build config 00:02:14.720 compress/isal: not in enabled drivers build config 00:02:14.720 compress/mlx5: not in enabled drivers build config 00:02:14.720 compress/nitrox: not in enabled drivers build config 00:02:14.720 compress/octeontx: not in enabled drivers build config 00:02:14.720 compress/zlib: not in enabled drivers build config 00:02:14.720 regex/*: missing internal dependency, "regexdev" 00:02:14.720 ml/*: missing internal dependency, "mldev" 
00:02:14.720 vdpa/ifc: not in enabled drivers build config 00:02:14.720 vdpa/mlx5: not in enabled drivers build config 00:02:14.720 vdpa/nfp: not in enabled drivers build config 00:02:14.720 vdpa/sfc: not in enabled drivers build config 00:02:14.720 event/*: missing internal dependency, "eventdev" 00:02:14.720 baseband/*: missing internal dependency, "bbdev" 00:02:14.720 gpu/*: missing internal dependency, "gpudev" 00:02:14.720 00:02:14.720 00:02:15.287 Build targets in project: 85 00:02:15.287 00:02:15.287 DPDK 24.03.0 00:02:15.287 00:02:15.287 User defined options 00:02:15.287 buildtype : debug 00:02:15.287 default_library : shared 00:02:15.287 libdir : lib 00:02:15.287 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:15.287 b_sanitize : address 00:02:15.287 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:15.287 c_link_args : 00:02:15.287 cpu_instruction_set: native 00:02:15.287 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:15.287 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:15.287 enable_docs : false 00:02:15.287 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:15.287 enable_kmods : false 00:02:15.287 max_lcores : 128 00:02:15.287 tests : false 00:02:15.287 00:02:15.287 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:15.546 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:15.805 [1/268] Compiling C object 
lib/librte_log.a.p/log_log_linux.c.o 00:02:15.805 [2/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:15.805 [3/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:15.805 [4/268] Linking static target lib/librte_kvargs.a 00:02:15.805 [5/268] Linking static target lib/librte_log.a 00:02:15.805 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:16.370 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:16.370 [8/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.370 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:16.370 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:16.370 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:16.370 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:16.370 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:16.370 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:16.370 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:16.370 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:16.370 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:16.370 [18/268] Linking static target lib/librte_telemetry.a 00:02:16.629 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.887 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:16.887 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:16.887 [22/268] Linking target lib/librte_log.so.24.1 00:02:16.887 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:16.887 [24/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:16.887 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:16.887 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:17.146 [27/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:17.146 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:17.146 [29/268] Linking target lib/librte_kvargs.so.24.1 00:02:17.146 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:17.146 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:17.403 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:17.403 [33/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.403 [34/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:17.403 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:17.403 [36/268] Linking target lib/librte_telemetry.so.24.1 00:02:17.403 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:17.403 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:17.660 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:17.660 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:17.660 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:17.660 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:17.660 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:17.660 [44/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:17.918 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 
00:02:17.918 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:17.918 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:18.177 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:18.177 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:18.177 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:18.177 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:18.177 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:18.436 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:18.436 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:18.436 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:18.436 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:18.695 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:18.695 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:18.695 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:18.695 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:18.695 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:18.955 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:18.955 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:18.955 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:18.955 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:18.955 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:19.214 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:19.214 [68/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:19.214 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:19.474 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:19.474 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:19.474 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:19.474 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:19.474 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:19.474 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:19.474 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:19.734 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:19.734 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:19.734 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:19.734 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:19.993 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:19.993 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:19.993 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:19.993 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:19.993 [85/268] Linking static target lib/librte_eal.a 00:02:20.252 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:20.252 [87/268] Linking static target lib/librte_ring.a 00:02:20.252 [88/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:20.252 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:20.510 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:20.510 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 
00:02:20.510 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:20.510 [93/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:20.510 [94/268] Linking static target lib/librte_rcu.a 00:02:20.769 [95/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.770 [96/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:20.770 [97/268] Linking static target lib/librte_mempool.a 00:02:21.028 [98/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:21.028 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:21.028 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:21.028 [101/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:21.028 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:21.028 [103/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:21.028 [104/268] Linking static target lib/librte_mbuf.a 00:02:21.028 [105/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:21.028 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:21.028 [107/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.286 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:21.286 [109/268] Linking static target lib/librte_net.a 00:02:21.544 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:21.544 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:21.544 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:21.803 [113/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:21.803 [114/268] Linking static target lib/librte_meter.a 00:02:21.803 [115/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.803 
[116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:21.803 [117/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.062 [118/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.062 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:22.062 [120/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.320 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:22.320 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:22.580 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:22.580 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:22.838 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:22.838 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:22.838 [127/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:22.838 [128/268] Linking static target lib/librte_pci.a 00:02:22.838 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:22.838 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:22.838 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:23.241 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:23.241 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:23.241 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:23.241 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:23.241 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:23.241 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 
00:02:23.241 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:23.241 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:23.241 [140/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.241 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:23.241 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:23.241 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:23.241 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:23.241 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:23.530 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:23.530 [147/268] Linking static target lib/librte_cmdline.a 00:02:23.809 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:23.809 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:23.809 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:23.809 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:23.809 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:24.067 [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:24.067 [154/268] Linking static target lib/librte_timer.a 00:02:24.067 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:24.067 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:24.324 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:24.324 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:24.324 [159/268] Linking static target lib/librte_compressdev.a 00:02:24.581 [160/268] Compiling C 
object lib/librte_power.a.p/power_guest_channel.c.o 00:02:24.581 [161/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:24.839 [162/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:24.839 [163/268] Linking static target lib/librte_dmadev.a 00:02:24.839 [164/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:24.839 [165/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.839 [166/268] Linking static target lib/librte_ethdev.a 00:02:24.839 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:24.839 [168/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:25.097 [169/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.097 [170/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:25.097 [171/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:25.355 [172/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.612 [173/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:25.612 [174/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:25.612 [175/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:25.612 [176/268] Linking static target lib/librte_hash.a 00:02:25.612 [177/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.612 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:25.612 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:25.612 [180/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:25.612 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:25.612 [182/268] Linking 
static target lib/librte_cryptodev.a 00:02:25.612 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:26.179 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:26.179 [185/268] Linking static target lib/librte_power.a 00:02:26.179 [186/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:26.179 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:26.179 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:26.437 [189/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:26.437 [190/268] Linking static target lib/librte_reorder.a 00:02:26.437 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:26.437 [192/268] Linking static target lib/librte_security.a 00:02:26.695 [193/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.953 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.211 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:27.211 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.212 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.469 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:27.469 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:27.469 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:27.727 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:27.728 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:27.728 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:27.985 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 
00:02:27.985 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:27.985 [206/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.243 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:28.243 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:28.243 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:28.243 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:28.243 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:28.501 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:28.501 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:28.501 [214/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:28.501 [215/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:28.501 [216/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:28.760 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:28.760 [218/268] Linking static target drivers/librte_bus_vdev.a 00:02:28.760 [219/268] Linking static target drivers/librte_bus_pci.a 00:02:28.760 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:28.760 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:29.018 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:29.018 [223/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:29.018 [224/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.018 [225/268] Compiling C object 
drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:29.018 [226/268] Linking static target drivers/librte_mempool_ring.a 00:02:29.276 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.212 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:31.149 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.149 [230/268] Linking target lib/librte_eal.so.24.1 00:02:31.149 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:31.408 [232/268] Linking target lib/librte_ring.so.24.1 00:02:31.408 [233/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:31.409 [234/268] Linking target lib/librte_meter.so.24.1 00:02:31.409 [235/268] Linking target lib/librte_timer.so.24.1 00:02:31.409 [236/268] Linking target lib/librte_pci.so.24.1 00:02:31.409 [237/268] Linking target lib/librte_dmadev.so.24.1 00:02:31.409 [238/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:31.409 [239/268] Linking target lib/librte_rcu.so.24.1 00:02:31.409 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:31.409 [241/268] Linking target lib/librte_mempool.so.24.1 00:02:31.409 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:31.409 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:31.409 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:31.409 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:31.667 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:31.667 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:31.667 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 
00:02:31.667 [249/268] Linking target lib/librte_mbuf.so.24.1 00:02:31.933 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:31.933 [251/268] Linking target lib/librte_cryptodev.so.24.1 00:02:31.933 [252/268] Linking target lib/librte_reorder.so.24.1 00:02:31.933 [253/268] Linking target lib/librte_net.so.24.1 00:02:31.933 [254/268] Linking target lib/librte_compressdev.so.24.1 00:02:31.933 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:31.933 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:31.933 [257/268] Linking target lib/librte_security.so.24.1 00:02:31.933 [258/268] Linking target lib/librte_cmdline.so.24.1 00:02:32.197 [259/268] Linking target lib/librte_hash.so.24.1 00:02:32.198 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:33.586 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.843 [262/268] Linking target lib/librte_ethdev.so.24.1 00:02:33.843 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:34.101 [264/268] Linking target lib/librte_power.so.24.1 00:02:34.360 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:34.360 [266/268] Linking static target lib/librte_vhost.a 00:02:36.887 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.146 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:37.146 INFO: autodetecting backend as ninja 00:02:37.146 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:55.311 CC lib/log/log_flags.o 00:02:55.311 CC lib/log/log_deprecated.o 00:02:55.311 CC lib/log/log.o 00:02:55.311 CC lib/ut_mock/mock.o 00:02:55.311 CC lib/ut/ut.o 00:02:55.311 LIB libspdk_log.a 00:02:55.311 LIB 
libspdk_ut_mock.a 00:02:55.311 SO libspdk_log.so.7.1 00:02:55.311 LIB libspdk_ut.a 00:02:55.311 SO libspdk_ut_mock.so.6.0 00:02:55.311 SO libspdk_ut.so.2.0 00:02:55.311 SYMLINK libspdk_log.so 00:02:55.311 SYMLINK libspdk_ut_mock.so 00:02:55.311 SYMLINK libspdk_ut.so 00:02:55.311 CXX lib/trace_parser/trace.o 00:02:55.311 CC lib/dma/dma.o 00:02:55.311 CC lib/util/base64.o 00:02:55.311 CC lib/util/crc32.o 00:02:55.311 CC lib/util/bit_array.o 00:02:55.311 CC lib/util/crc32c.o 00:02:55.311 CC lib/util/cpuset.o 00:02:55.311 CC lib/util/crc16.o 00:02:55.311 CC lib/ioat/ioat.o 00:02:55.311 CC lib/vfio_user/host/vfio_user_pci.o 00:02:55.569 CC lib/util/crc32_ieee.o 00:02:55.569 CC lib/util/crc64.o 00:02:55.569 CC lib/vfio_user/host/vfio_user.o 00:02:55.570 CC lib/util/dif.o 00:02:55.570 CC lib/util/fd.o 00:02:55.570 LIB libspdk_dma.a 00:02:55.570 CC lib/util/fd_group.o 00:02:55.570 SO libspdk_dma.so.5.0 00:02:55.570 CC lib/util/file.o 00:02:55.570 CC lib/util/hexlify.o 00:02:55.570 SYMLINK libspdk_dma.so 00:02:55.570 CC lib/util/iov.o 00:02:55.570 LIB libspdk_ioat.a 00:02:55.570 CC lib/util/math.o 00:02:55.570 CC lib/util/net.o 00:02:55.570 LIB libspdk_vfio_user.a 00:02:55.570 SO libspdk_ioat.so.7.0 00:02:55.570 SO libspdk_vfio_user.so.5.0 00:02:55.828 SYMLINK libspdk_ioat.so 00:02:55.828 CC lib/util/pipe.o 00:02:55.828 CC lib/util/strerror_tls.o 00:02:55.828 CC lib/util/string.o 00:02:55.828 SYMLINK libspdk_vfio_user.so 00:02:55.828 CC lib/util/uuid.o 00:02:55.828 CC lib/util/xor.o 00:02:55.828 CC lib/util/zipf.o 00:02:55.828 CC lib/util/md5.o 00:02:56.085 LIB libspdk_util.a 00:02:56.343 LIB libspdk_trace_parser.a 00:02:56.343 SO libspdk_util.so.10.1 00:02:56.343 SO libspdk_trace_parser.so.6.0 00:02:56.343 SYMLINK libspdk_trace_parser.so 00:02:56.343 SYMLINK libspdk_util.so 00:02:56.601 CC lib/json/json_parse.o 00:02:56.601 CC lib/json/json_util.o 00:02:56.601 CC lib/json/json_write.o 00:02:56.601 CC lib/vmd/led.o 00:02:56.601 CC lib/vmd/vmd.o 00:02:56.601 CC 
lib/rdma_utils/rdma_utils.o 00:02:56.601 CC lib/conf/conf.o 00:02:56.601 CC lib/env_dpdk/env.o 00:02:56.601 CC lib/env_dpdk/memory.o 00:02:56.601 CC lib/idxd/idxd.o 00:02:56.860 CC lib/env_dpdk/pci.o 00:02:56.860 LIB libspdk_conf.a 00:02:56.860 CC lib/env_dpdk/init.o 00:02:56.860 CC lib/idxd/idxd_user.o 00:02:56.860 SO libspdk_conf.so.6.0 00:02:56.860 LIB libspdk_rdma_utils.a 00:02:56.860 LIB libspdk_json.a 00:02:56.860 SO libspdk_rdma_utils.so.1.0 00:02:56.860 SYMLINK libspdk_conf.so 00:02:56.860 CC lib/env_dpdk/threads.o 00:02:56.860 SO libspdk_json.so.6.0 00:02:56.860 SYMLINK libspdk_rdma_utils.so 00:02:56.860 CC lib/env_dpdk/pci_ioat.o 00:02:57.119 SYMLINK libspdk_json.so 00:02:57.119 CC lib/env_dpdk/pci_virtio.o 00:02:57.119 CC lib/env_dpdk/pci_vmd.o 00:02:57.119 CC lib/env_dpdk/pci_idxd.o 00:02:57.119 CC lib/env_dpdk/pci_event.o 00:02:57.119 CC lib/env_dpdk/sigbus_handler.o 00:02:57.119 CC lib/env_dpdk/pci_dpdk.o 00:02:57.119 CC lib/idxd/idxd_kernel.o 00:02:57.119 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:57.119 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:57.377 CC lib/rdma_provider/common.o 00:02:57.377 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:57.377 LIB libspdk_vmd.a 00:02:57.377 LIB libspdk_idxd.a 00:02:57.377 SO libspdk_vmd.so.6.0 00:02:57.377 SO libspdk_idxd.so.12.1 00:02:57.377 CC lib/jsonrpc/jsonrpc_server.o 00:02:57.377 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:57.377 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:57.377 SYMLINK libspdk_vmd.so 00:02:57.377 CC lib/jsonrpc/jsonrpc_client.o 00:02:57.377 SYMLINK libspdk_idxd.so 00:02:57.637 LIB libspdk_rdma_provider.a 00:02:57.637 SO libspdk_rdma_provider.so.7.0 00:02:57.637 SYMLINK libspdk_rdma_provider.so 00:02:57.637 LIB libspdk_jsonrpc.a 00:02:57.637 SO libspdk_jsonrpc.so.6.0 00:02:57.896 SYMLINK libspdk_jsonrpc.so 00:02:58.155 LIB libspdk_env_dpdk.a 00:02:58.155 CC lib/rpc/rpc.o 00:02:58.155 SO libspdk_env_dpdk.so.15.1 00:02:58.415 SYMLINK libspdk_env_dpdk.so 00:02:58.415 LIB libspdk_rpc.a 00:02:58.415 SO 
libspdk_rpc.so.6.0 00:02:58.675 SYMLINK libspdk_rpc.so 00:02:58.934 CC lib/notify/notify.o 00:02:58.934 CC lib/notify/notify_rpc.o 00:02:58.934 CC lib/trace/trace.o 00:02:58.934 CC lib/trace/trace_rpc.o 00:02:58.934 CC lib/trace/trace_flags.o 00:02:58.934 CC lib/keyring/keyring.o 00:02:58.934 CC lib/keyring/keyring_rpc.o 00:02:58.934 LIB libspdk_notify.a 00:02:58.934 SO libspdk_notify.so.6.0 00:02:59.194 SYMLINK libspdk_notify.so 00:02:59.194 LIB libspdk_keyring.a 00:02:59.194 LIB libspdk_trace.a 00:02:59.194 SO libspdk_keyring.so.2.0 00:02:59.194 SO libspdk_trace.so.11.0 00:02:59.194 SYMLINK libspdk_keyring.so 00:02:59.194 SYMLINK libspdk_trace.so 00:02:59.762 CC lib/thread/iobuf.o 00:02:59.762 CC lib/thread/thread.o 00:02:59.762 CC lib/sock/sock.o 00:02:59.762 CC lib/sock/sock_rpc.o 00:03:00.023 LIB libspdk_sock.a 00:03:00.023 SO libspdk_sock.so.10.0 00:03:00.283 SYMLINK libspdk_sock.so 00:03:00.542 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:00.542 CC lib/nvme/nvme_ctrlr.o 00:03:00.542 CC lib/nvme/nvme_fabric.o 00:03:00.542 CC lib/nvme/nvme_ns_cmd.o 00:03:00.542 CC lib/nvme/nvme_ns.o 00:03:00.542 CC lib/nvme/nvme_pcie_common.o 00:03:00.542 CC lib/nvme/nvme_pcie.o 00:03:00.542 CC lib/nvme/nvme.o 00:03:00.542 CC lib/nvme/nvme_qpair.o 00:03:01.111 LIB libspdk_thread.a 00:03:01.111 SO libspdk_thread.so.11.0 00:03:01.371 CC lib/nvme/nvme_quirks.o 00:03:01.371 CC lib/nvme/nvme_transport.o 00:03:01.371 SYMLINK libspdk_thread.so 00:03:01.371 CC lib/nvme/nvme_discovery.o 00:03:01.371 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:01.371 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:01.371 CC lib/nvme/nvme_tcp.o 00:03:01.630 CC lib/nvme/nvme_opal.o 00:03:01.630 CC lib/nvme/nvme_io_msg.o 00:03:01.630 CC lib/nvme/nvme_poll_group.o 00:03:01.630 CC lib/nvme/nvme_zns.o 00:03:01.891 CC lib/nvme/nvme_stubs.o 00:03:01.891 CC lib/nvme/nvme_auth.o 00:03:01.891 CC lib/nvme/nvme_cuse.o 00:03:02.149 CC lib/accel/accel.o 00:03:02.149 CC lib/accel/accel_rpc.o 00:03:02.149 CC lib/accel/accel_sw.o 00:03:02.407 
CC lib/nvme/nvme_rdma.o 00:03:02.407 CC lib/blob/blobstore.o 00:03:02.407 CC lib/init/json_config.o 00:03:02.407 CC lib/init/subsystem.o 00:03:02.666 CC lib/virtio/virtio.o 00:03:02.666 CC lib/init/subsystem_rpc.o 00:03:02.666 CC lib/fsdev/fsdev.o 00:03:02.666 CC lib/fsdev/fsdev_io.o 00:03:02.926 CC lib/init/rpc.o 00:03:02.926 CC lib/fsdev/fsdev_rpc.o 00:03:02.926 CC lib/blob/request.o 00:03:02.926 LIB libspdk_init.a 00:03:02.926 CC lib/virtio/virtio_vhost_user.o 00:03:02.926 CC lib/virtio/virtio_vfio_user.o 00:03:02.926 SO libspdk_init.so.6.0 00:03:03.208 CC lib/blob/zeroes.o 00:03:03.208 SYMLINK libspdk_init.so 00:03:03.208 CC lib/virtio/virtio_pci.o 00:03:03.208 CC lib/blob/blob_bs_dev.o 00:03:03.208 LIB libspdk_accel.a 00:03:03.208 SO libspdk_accel.so.16.0 00:03:03.485 LIB libspdk_virtio.a 00:03:03.485 SYMLINK libspdk_accel.so 00:03:03.485 CC lib/event/app_rpc.o 00:03:03.485 CC lib/event/reactor.o 00:03:03.485 CC lib/event/log_rpc.o 00:03:03.485 CC lib/event/app.o 00:03:03.485 CC lib/event/scheduler_static.o 00:03:03.485 SO libspdk_virtio.so.7.0 00:03:03.485 LIB libspdk_fsdev.a 00:03:03.485 SO libspdk_fsdev.so.2.0 00:03:03.485 SYMLINK libspdk_virtio.so 00:03:03.485 CC lib/bdev/bdev.o 00:03:03.485 SYMLINK libspdk_fsdev.so 00:03:03.485 CC lib/bdev/bdev_rpc.o 00:03:03.485 CC lib/bdev/bdev_zone.o 00:03:03.485 CC lib/bdev/part.o 00:03:03.485 CC lib/bdev/scsi_nvme.o 00:03:03.744 LIB libspdk_nvme.a 00:03:03.744 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:04.004 LIB libspdk_event.a 00:03:04.004 SO libspdk_event.so.14.0 00:03:04.004 SO libspdk_nvme.so.15.0 00:03:04.004 SYMLINK libspdk_event.so 00:03:04.263 SYMLINK libspdk_nvme.so 00:03:04.523 LIB libspdk_fuse_dispatcher.a 00:03:04.523 SO libspdk_fuse_dispatcher.so.1.0 00:03:04.523 SYMLINK libspdk_fuse_dispatcher.so 00:03:05.906 LIB libspdk_blob.a 00:03:05.906 SO libspdk_blob.so.12.0 00:03:06.166 SYMLINK libspdk_blob.so 00:03:06.426 LIB libspdk_bdev.a 00:03:06.426 SO libspdk_bdev.so.17.0 00:03:06.426 CC 
lib/lvol/lvol.o 00:03:06.426 CC lib/blobfs/blobfs.o 00:03:06.426 CC lib/blobfs/tree.o 00:03:06.685 SYMLINK libspdk_bdev.so 00:03:06.685 CC lib/nbd/nbd.o 00:03:06.685 CC lib/scsi/dev.o 00:03:06.685 CC lib/nbd/nbd_rpc.o 00:03:06.685 CC lib/ftl/ftl_init.o 00:03:06.685 CC lib/ftl/ftl_layout.o 00:03:06.685 CC lib/ftl/ftl_core.o 00:03:06.685 CC lib/nvmf/ctrlr.o 00:03:06.685 CC lib/ublk/ublk.o 00:03:06.945 CC lib/ftl/ftl_debug.o 00:03:06.945 CC lib/ftl/ftl_io.o 00:03:06.945 CC lib/scsi/lun.o 00:03:07.207 CC lib/ftl/ftl_sb.o 00:03:07.207 CC lib/scsi/port.o 00:03:07.207 CC lib/ftl/ftl_l2p.o 00:03:07.207 CC lib/scsi/scsi.o 00:03:07.207 LIB libspdk_nbd.a 00:03:07.207 SO libspdk_nbd.so.7.0 00:03:07.207 CC lib/ftl/ftl_l2p_flat.o 00:03:07.207 CC lib/scsi/scsi_bdev.o 00:03:07.207 CC lib/scsi/scsi_pr.o 00:03:07.207 SYMLINK libspdk_nbd.so 00:03:07.207 CC lib/scsi/scsi_rpc.o 00:03:07.485 CC lib/nvmf/ctrlr_discovery.o 00:03:07.485 LIB libspdk_blobfs.a 00:03:07.485 SO libspdk_blobfs.so.11.0 00:03:07.485 CC lib/nvmf/ctrlr_bdev.o 00:03:07.485 CC lib/scsi/task.o 00:03:07.485 CC lib/ftl/ftl_nv_cache.o 00:03:07.485 CC lib/ublk/ublk_rpc.o 00:03:07.485 SYMLINK libspdk_blobfs.so 00:03:07.485 CC lib/ftl/ftl_band.o 00:03:07.485 LIB libspdk_lvol.a 00:03:07.485 SO libspdk_lvol.so.11.0 00:03:07.485 SYMLINK libspdk_lvol.so 00:03:07.485 CC lib/ftl/ftl_band_ops.o 00:03:07.743 LIB libspdk_ublk.a 00:03:07.743 CC lib/ftl/ftl_writer.o 00:03:07.743 SO libspdk_ublk.so.3.0 00:03:07.743 CC lib/ftl/ftl_rq.o 00:03:07.743 SYMLINK libspdk_ublk.so 00:03:07.743 CC lib/ftl/ftl_reloc.o 00:03:07.743 LIB libspdk_scsi.a 00:03:07.743 CC lib/ftl/ftl_l2p_cache.o 00:03:07.743 CC lib/nvmf/subsystem.o 00:03:07.743 CC lib/nvmf/nvmf.o 00:03:08.002 SO libspdk_scsi.so.9.0 00:03:08.002 CC lib/ftl/ftl_p2l.o 00:03:08.002 CC lib/ftl/ftl_p2l_log.o 00:03:08.002 SYMLINK libspdk_scsi.so 00:03:08.002 CC lib/nvmf/nvmf_rpc.o 00:03:08.002 CC lib/ftl/mngt/ftl_mngt.o 00:03:08.002 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:08.262 CC 
lib/iscsi/conn.o 00:03:08.262 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:08.262 CC lib/iscsi/init_grp.o 00:03:08.521 CC lib/vhost/vhost.o 00:03:08.521 CC lib/vhost/vhost_rpc.o 00:03:08.521 CC lib/iscsi/iscsi.o 00:03:08.521 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:08.521 CC lib/iscsi/param.o 00:03:08.780 CC lib/nvmf/transport.o 00:03:08.780 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:08.780 CC lib/nvmf/tcp.o 00:03:08.780 CC lib/vhost/vhost_scsi.o 00:03:09.039 CC lib/iscsi/portal_grp.o 00:03:09.039 CC lib/vhost/vhost_blk.o 00:03:09.039 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:09.039 CC lib/nvmf/stubs.o 00:03:09.299 CC lib/iscsi/tgt_node.o 00:03:09.299 CC lib/vhost/rte_vhost_user.o 00:03:09.299 CC lib/nvmf/mdns_server.o 00:03:09.299 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:09.299 CC lib/iscsi/iscsi_subsystem.o 00:03:09.559 CC lib/nvmf/rdma.o 00:03:09.559 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:09.559 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:09.818 CC lib/nvmf/auth.o 00:03:09.818 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:09.818 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:09.818 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:09.818 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:09.818 CC lib/ftl/utils/ftl_conf.o 00:03:09.818 CC lib/ftl/utils/ftl_md.o 00:03:10.077 CC lib/iscsi/iscsi_rpc.o 00:03:10.077 CC lib/ftl/utils/ftl_mempool.o 00:03:10.077 CC lib/iscsi/task.o 00:03:10.077 CC lib/ftl/utils/ftl_bitmap.o 00:03:10.336 CC lib/ftl/utils/ftl_property.o 00:03:10.336 LIB libspdk_vhost.a 00:03:10.336 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:10.336 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:10.336 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:10.336 SO libspdk_vhost.so.8.0 00:03:10.336 LIB libspdk_iscsi.a 00:03:10.336 SYMLINK libspdk_vhost.so 00:03:10.336 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:10.336 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:10.336 SO libspdk_iscsi.so.8.0 00:03:10.596 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:10.596 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:10.596 CC 
lib/ftl/upgrade/ftl_sb_v3.o 00:03:10.596 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:10.596 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:10.596 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:10.596 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:10.596 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:10.596 SYMLINK libspdk_iscsi.so 00:03:10.596 CC lib/ftl/base/ftl_base_dev.o 00:03:10.596 CC lib/ftl/base/ftl_base_bdev.o 00:03:10.596 CC lib/ftl/ftl_trace.o 00:03:10.855 LIB libspdk_ftl.a 00:03:11.114 SO libspdk_ftl.so.9.0 00:03:11.373 SYMLINK libspdk_ftl.so 00:03:11.940 LIB libspdk_nvmf.a 00:03:11.940 SO libspdk_nvmf.so.20.0 00:03:12.198 SYMLINK libspdk_nvmf.so 00:03:12.457 CC module/env_dpdk/env_dpdk_rpc.o 00:03:12.716 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:12.716 CC module/fsdev/aio/fsdev_aio.o 00:03:12.716 CC module/keyring/file/keyring.o 00:03:12.716 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:12.716 CC module/keyring/linux/keyring.o 00:03:12.716 CC module/accel/error/accel_error.o 00:03:12.716 CC module/sock/posix/posix.o 00:03:12.716 CC module/scheduler/gscheduler/gscheduler.o 00:03:12.716 CC module/blob/bdev/blob_bdev.o 00:03:12.716 LIB libspdk_env_dpdk_rpc.a 00:03:12.716 SO libspdk_env_dpdk_rpc.so.6.0 00:03:12.716 SYMLINK libspdk_env_dpdk_rpc.so 00:03:12.716 CC module/keyring/linux/keyring_rpc.o 00:03:12.716 CC module/keyring/file/keyring_rpc.o 00:03:12.716 LIB libspdk_scheduler_dpdk_governor.a 00:03:12.716 LIB libspdk_scheduler_gscheduler.a 00:03:12.716 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:12.975 SO libspdk_scheduler_gscheduler.so.4.0 00:03:12.975 LIB libspdk_scheduler_dynamic.a 00:03:12.975 CC module/accel/error/accel_error_rpc.o 00:03:12.975 SO libspdk_scheduler_dynamic.so.4.0 00:03:12.975 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:12.975 SYMLINK libspdk_scheduler_gscheduler.so 00:03:12.975 LIB libspdk_keyring_linux.a 00:03:12.975 LIB libspdk_keyring_file.a 00:03:12.975 SYMLINK libspdk_scheduler_dynamic.so 00:03:12.975 SO 
libspdk_keyring_linux.so.1.0 00:03:12.975 CC module/accel/ioat/accel_ioat.o 00:03:12.975 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:12.975 SO libspdk_keyring_file.so.2.0 00:03:12.975 LIB libspdk_blob_bdev.a 00:03:12.975 SO libspdk_blob_bdev.so.12.0 00:03:12.975 LIB libspdk_accel_error.a 00:03:12.975 SYMLINK libspdk_keyring_linux.so 00:03:12.975 CC module/fsdev/aio/linux_aio_mgr.o 00:03:12.975 SYMLINK libspdk_keyring_file.so 00:03:12.975 CC module/accel/ioat/accel_ioat_rpc.o 00:03:12.975 SO libspdk_accel_error.so.2.0 00:03:12.975 CC module/accel/dsa/accel_dsa.o 00:03:12.975 SYMLINK libspdk_blob_bdev.so 00:03:12.975 CC module/accel/dsa/accel_dsa_rpc.o 00:03:12.975 CC module/accel/iaa/accel_iaa.o 00:03:13.234 SYMLINK libspdk_accel_error.so 00:03:13.234 CC module/accel/iaa/accel_iaa_rpc.o 00:03:13.234 LIB libspdk_accel_ioat.a 00:03:13.234 SO libspdk_accel_ioat.so.6.0 00:03:13.234 SYMLINK libspdk_accel_ioat.so 00:03:13.234 LIB libspdk_accel_iaa.a 00:03:13.234 SO libspdk_accel_iaa.so.3.0 00:03:13.494 LIB libspdk_accel_dsa.a 00:03:13.494 CC module/bdev/delay/vbdev_delay.o 00:03:13.494 CC module/blobfs/bdev/blobfs_bdev.o 00:03:13.494 CC module/bdev/error/vbdev_error.o 00:03:13.494 SO libspdk_accel_dsa.so.5.0 00:03:13.494 SYMLINK libspdk_accel_iaa.so 00:03:13.494 CC module/bdev/error/vbdev_error_rpc.o 00:03:13.494 CC module/bdev/gpt/gpt.o 00:03:13.494 LIB libspdk_fsdev_aio.a 00:03:13.494 CC module/bdev/malloc/bdev_malloc.o 00:03:13.494 CC module/bdev/lvol/vbdev_lvol.o 00:03:13.494 LIB libspdk_sock_posix.a 00:03:13.494 SO libspdk_fsdev_aio.so.1.0 00:03:13.494 SYMLINK libspdk_accel_dsa.so 00:03:13.494 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:13.494 SO libspdk_sock_posix.so.6.0 00:03:13.494 SYMLINK libspdk_fsdev_aio.so 00:03:13.494 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:13.753 SYMLINK libspdk_sock_posix.so 00:03:13.753 CC module/bdev/gpt/vbdev_gpt.o 00:03:13.753 LIB libspdk_blobfs_bdev.a 00:03:13.753 LIB libspdk_bdev_error.a 00:03:13.753 SO 
libspdk_blobfs_bdev.so.6.0 00:03:13.753 SO libspdk_bdev_error.so.6.0 00:03:13.753 CC module/bdev/null/bdev_null.o 00:03:13.753 CC module/bdev/passthru/vbdev_passthru.o 00:03:13.753 CC module/bdev/nvme/bdev_nvme.o 00:03:13.753 SYMLINK libspdk_blobfs_bdev.so 00:03:13.753 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:13.753 SYMLINK libspdk_bdev_error.so 00:03:13.753 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:13.753 LIB libspdk_bdev_malloc.a 00:03:14.012 SO libspdk_bdev_malloc.so.6.0 00:03:14.012 CC module/bdev/raid/bdev_raid.o 00:03:14.012 LIB libspdk_bdev_gpt.a 00:03:14.012 CC module/bdev/split/vbdev_split.o 00:03:14.012 SYMLINK libspdk_bdev_malloc.so 00:03:14.012 CC module/bdev/raid/bdev_raid_rpc.o 00:03:14.012 SO libspdk_bdev_gpt.so.6.0 00:03:14.012 LIB libspdk_bdev_delay.a 00:03:14.012 SO libspdk_bdev_delay.so.6.0 00:03:14.012 SYMLINK libspdk_bdev_gpt.so 00:03:14.012 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:14.012 CC module/bdev/raid/bdev_raid_sb.o 00:03:14.012 CC module/bdev/null/bdev_null_rpc.o 00:03:14.012 SYMLINK libspdk_bdev_delay.so 00:03:14.012 CC module/bdev/raid/raid0.o 00:03:14.012 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:14.272 CC module/bdev/raid/raid1.o 00:03:14.272 CC module/bdev/split/vbdev_split_rpc.o 00:03:14.272 LIB libspdk_bdev_null.a 00:03:14.272 SO libspdk_bdev_null.so.6.0 00:03:14.272 LIB libspdk_bdev_passthru.a 00:03:14.272 SO libspdk_bdev_passthru.so.6.0 00:03:14.272 SYMLINK libspdk_bdev_null.so 00:03:14.272 CC module/bdev/raid/concat.o 00:03:14.272 CC module/bdev/raid/raid5f.o 00:03:14.272 CC module/bdev/nvme/nvme_rpc.o 00:03:14.272 LIB libspdk_bdev_split.a 00:03:14.272 SYMLINK libspdk_bdev_passthru.so 00:03:14.531 SO libspdk_bdev_split.so.6.0 00:03:14.531 LIB libspdk_bdev_lvol.a 00:03:14.532 SYMLINK libspdk_bdev_split.so 00:03:14.532 CC module/bdev/nvme/bdev_mdns_client.o 00:03:14.532 SO libspdk_bdev_lvol.so.6.0 00:03:14.532 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:14.532 CC module/bdev/nvme/vbdev_opal.o 00:03:14.532 
SYMLINK libspdk_bdev_lvol.so 00:03:14.532 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:14.532 CC module/bdev/aio/bdev_aio.o 00:03:14.532 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:14.791 CC module/bdev/ftl/bdev_ftl.o 00:03:14.791 CC module/bdev/iscsi/bdev_iscsi.o 00:03:14.791 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:14.791 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:14.791 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:15.051 CC module/bdev/aio/bdev_aio_rpc.o 00:03:15.051 LIB libspdk_bdev_zone_block.a 00:03:15.051 SO libspdk_bdev_zone_block.so.6.0 00:03:15.051 LIB libspdk_bdev_ftl.a 00:03:15.051 SYMLINK libspdk_bdev_zone_block.so 00:03:15.051 SO libspdk_bdev_ftl.so.6.0 00:03:15.051 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:15.051 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:15.051 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:15.051 LIB libspdk_bdev_aio.a 00:03:15.051 LIB libspdk_bdev_raid.a 00:03:15.051 SO libspdk_bdev_aio.so.6.0 00:03:15.051 SYMLINK libspdk_bdev_ftl.so 00:03:15.311 LIB libspdk_bdev_iscsi.a 00:03:15.311 SO libspdk_bdev_raid.so.6.0 00:03:15.311 SO libspdk_bdev_iscsi.so.6.0 00:03:15.311 SYMLINK libspdk_bdev_aio.so 00:03:15.311 SYMLINK libspdk_bdev_raid.so 00:03:15.311 SYMLINK libspdk_bdev_iscsi.so 00:03:15.881 LIB libspdk_bdev_virtio.a 00:03:15.881 SO libspdk_bdev_virtio.so.6.0 00:03:15.881 SYMLINK libspdk_bdev_virtio.so 00:03:16.823 LIB libspdk_bdev_nvme.a 00:03:17.084 SO libspdk_bdev_nvme.so.7.1 00:03:17.084 SYMLINK libspdk_bdev_nvme.so 00:03:17.692 CC module/event/subsystems/vmd/vmd.o 00:03:17.692 CC module/event/subsystems/sock/sock.o 00:03:17.692 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:17.692 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:17.692 CC module/event/subsystems/scheduler/scheduler.o 00:03:17.692 CC module/event/subsystems/iobuf/iobuf.o 00:03:17.692 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:17.692 CC module/event/subsystems/keyring/keyring.o 00:03:17.692 CC module/event/subsystems/fsdev/fsdev.o 
00:03:17.692 LIB libspdk_event_keyring.a 00:03:17.952 LIB libspdk_event_vhost_blk.a 00:03:17.952 LIB libspdk_event_fsdev.a 00:03:17.952 LIB libspdk_event_sock.a 00:03:17.952 LIB libspdk_event_vmd.a 00:03:17.952 LIB libspdk_event_scheduler.a 00:03:17.952 SO libspdk_event_keyring.so.1.0 00:03:17.952 SO libspdk_event_fsdev.so.1.0 00:03:17.952 LIB libspdk_event_iobuf.a 00:03:17.952 SO libspdk_event_sock.so.5.0 00:03:17.952 SO libspdk_event_vhost_blk.so.3.0 00:03:17.952 SO libspdk_event_scheduler.so.4.0 00:03:17.952 SO libspdk_event_vmd.so.6.0 00:03:17.952 SO libspdk_event_iobuf.so.3.0 00:03:17.952 SYMLINK libspdk_event_keyring.so 00:03:17.952 SYMLINK libspdk_event_sock.so 00:03:17.952 SYMLINK libspdk_event_fsdev.so 00:03:17.952 SYMLINK libspdk_event_vhost_blk.so 00:03:17.952 SYMLINK libspdk_event_scheduler.so 00:03:17.952 SYMLINK libspdk_event_vmd.so 00:03:17.952 SYMLINK libspdk_event_iobuf.so 00:03:18.520 CC module/event/subsystems/accel/accel.o 00:03:18.520 LIB libspdk_event_accel.a 00:03:18.520 SO libspdk_event_accel.so.6.0 00:03:18.780 SYMLINK libspdk_event_accel.so 00:03:19.039 CC module/event/subsystems/bdev/bdev.o 00:03:19.300 LIB libspdk_event_bdev.a 00:03:19.300 SO libspdk_event_bdev.so.6.0 00:03:19.300 SYMLINK libspdk_event_bdev.so 00:03:19.871 CC module/event/subsystems/nbd/nbd.o 00:03:19.871 CC module/event/subsystems/scsi/scsi.o 00:03:19.871 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:19.871 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:19.871 CC module/event/subsystems/ublk/ublk.o 00:03:19.871 LIB libspdk_event_nbd.a 00:03:19.871 SO libspdk_event_nbd.so.6.0 00:03:19.871 LIB libspdk_event_scsi.a 00:03:19.871 LIB libspdk_event_ublk.a 00:03:19.871 SYMLINK libspdk_event_nbd.so 00:03:19.871 SO libspdk_event_scsi.so.6.0 00:03:19.871 SO libspdk_event_ublk.so.3.0 00:03:20.131 LIB libspdk_event_nvmf.a 00:03:20.131 SYMLINK libspdk_event_ublk.so 00:03:20.131 SYMLINK libspdk_event_scsi.so 00:03:20.131 SO libspdk_event_nvmf.so.6.0 00:03:20.131 SYMLINK 
libspdk_event_nvmf.so 00:03:20.391 CC module/event/subsystems/iscsi/iscsi.o 00:03:20.392 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:20.653 LIB libspdk_event_vhost_scsi.a 00:03:20.653 LIB libspdk_event_iscsi.a 00:03:20.653 SO libspdk_event_vhost_scsi.so.3.0 00:03:20.653 SO libspdk_event_iscsi.so.6.0 00:03:20.653 SYMLINK libspdk_event_vhost_scsi.so 00:03:20.653 SYMLINK libspdk_event_iscsi.so 00:03:20.912 SO libspdk.so.6.0 00:03:20.912 SYMLINK libspdk.so 00:03:21.173 CXX app/trace/trace.o 00:03:21.173 CC app/spdk_lspci/spdk_lspci.o 00:03:21.173 CC app/spdk_nvme_perf/perf.o 00:03:21.173 CC app/trace_record/trace_record.o 00:03:21.173 CC app/spdk_nvme_identify/identify.o 00:03:21.173 CC app/iscsi_tgt/iscsi_tgt.o 00:03:21.173 CC app/spdk_tgt/spdk_tgt.o 00:03:21.173 CC app/nvmf_tgt/nvmf_main.o 00:03:21.173 CC test/thread/poller_perf/poller_perf.o 00:03:21.173 CC examples/util/zipf/zipf.o 00:03:21.173 LINK spdk_lspci 00:03:21.432 LINK nvmf_tgt 00:03:21.433 LINK poller_perf 00:03:21.433 LINK iscsi_tgt 00:03:21.433 LINK spdk_tgt 00:03:21.433 LINK zipf 00:03:21.433 LINK spdk_trace_record 00:03:21.433 LINK spdk_trace 00:03:21.692 CC app/spdk_nvme_discover/discovery_aer.o 00:03:21.692 CC app/spdk_top/spdk_top.o 00:03:21.692 CC app/spdk_dd/spdk_dd.o 00:03:21.692 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:21.692 LINK spdk_nvme_discover 00:03:21.692 CC examples/ioat/perf/perf.o 00:03:21.950 CC test/dma/test_dma/test_dma.o 00:03:21.950 CC app/fio/nvme/fio_plugin.o 00:03:21.950 CC app/fio/bdev/fio_plugin.o 00:03:21.950 LINK interrupt_tgt 00:03:22.208 LINK ioat_perf 00:03:22.208 LINK spdk_nvme_perf 00:03:22.208 CC app/vhost/vhost.o 00:03:22.208 LINK spdk_dd 00:03:22.208 CC examples/ioat/verify/verify.o 00:03:22.467 LINK vhost 00:03:22.467 LINK spdk_nvme_identify 00:03:22.467 LINK test_dma 00:03:22.467 LINK spdk_bdev 00:03:22.467 CC examples/sock/hello_world/hello_sock.o 00:03:22.467 LINK spdk_nvme 00:03:22.467 CC examples/thread/thread/thread_ex.o 00:03:22.726 LINK 
verify 00:03:22.726 CC examples/vmd/lsvmd/lsvmd.o 00:03:22.726 CC examples/vmd/led/led.o 00:03:22.726 TEST_HEADER include/spdk/accel.h 00:03:22.726 TEST_HEADER include/spdk/accel_module.h 00:03:22.726 TEST_HEADER include/spdk/assert.h 00:03:22.726 TEST_HEADER include/spdk/barrier.h 00:03:22.726 TEST_HEADER include/spdk/base64.h 00:03:22.726 TEST_HEADER include/spdk/bdev.h 00:03:22.726 TEST_HEADER include/spdk/bdev_module.h 00:03:22.726 TEST_HEADER include/spdk/bdev_zone.h 00:03:22.726 TEST_HEADER include/spdk/bit_array.h 00:03:22.726 TEST_HEADER include/spdk/bit_pool.h 00:03:22.726 TEST_HEADER include/spdk/blob_bdev.h 00:03:22.726 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:22.726 TEST_HEADER include/spdk/blobfs.h 00:03:22.726 TEST_HEADER include/spdk/blob.h 00:03:22.726 TEST_HEADER include/spdk/conf.h 00:03:22.726 TEST_HEADER include/spdk/config.h 00:03:22.726 TEST_HEADER include/spdk/cpuset.h 00:03:22.726 LINK spdk_top 00:03:22.726 TEST_HEADER include/spdk/crc16.h 00:03:22.726 TEST_HEADER include/spdk/crc32.h 00:03:22.726 TEST_HEADER include/spdk/crc64.h 00:03:22.726 TEST_HEADER include/spdk/dif.h 00:03:22.726 TEST_HEADER include/spdk/dma.h 00:03:22.726 TEST_HEADER include/spdk/endian.h 00:03:22.726 TEST_HEADER include/spdk/env_dpdk.h 00:03:22.726 TEST_HEADER include/spdk/env.h 00:03:22.726 TEST_HEADER include/spdk/event.h 00:03:22.726 TEST_HEADER include/spdk/fd_group.h 00:03:22.726 TEST_HEADER include/spdk/fd.h 00:03:22.726 TEST_HEADER include/spdk/file.h 00:03:22.726 TEST_HEADER include/spdk/fsdev.h 00:03:22.726 LINK hello_sock 00:03:22.726 TEST_HEADER include/spdk/fsdev_module.h 00:03:22.726 TEST_HEADER include/spdk/ftl.h 00:03:22.726 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:22.726 CC examples/idxd/perf/perf.o 00:03:22.726 TEST_HEADER include/spdk/gpt_spec.h 00:03:22.726 CC test/app/bdev_svc/bdev_svc.o 00:03:22.726 TEST_HEADER include/spdk/hexlify.h 00:03:22.726 TEST_HEADER include/spdk/histogram_data.h 00:03:22.726 TEST_HEADER include/spdk/idxd.h 
00:03:22.726 TEST_HEADER include/spdk/idxd_spec.h 00:03:22.726 TEST_HEADER include/spdk/init.h 00:03:22.726 TEST_HEADER include/spdk/ioat.h 00:03:22.726 TEST_HEADER include/spdk/ioat_spec.h 00:03:22.726 TEST_HEADER include/spdk/iscsi_spec.h 00:03:22.726 LINK thread 00:03:22.726 TEST_HEADER include/spdk/json.h 00:03:22.726 TEST_HEADER include/spdk/jsonrpc.h 00:03:22.726 TEST_HEADER include/spdk/keyring.h 00:03:22.726 TEST_HEADER include/spdk/keyring_module.h 00:03:22.726 TEST_HEADER include/spdk/likely.h 00:03:22.726 TEST_HEADER include/spdk/log.h 00:03:22.726 TEST_HEADER include/spdk/lvol.h 00:03:22.726 TEST_HEADER include/spdk/md5.h 00:03:22.726 LINK lsvmd 00:03:22.726 TEST_HEADER include/spdk/memory.h 00:03:22.726 TEST_HEADER include/spdk/mmio.h 00:03:22.985 TEST_HEADER include/spdk/nbd.h 00:03:22.985 TEST_HEADER include/spdk/net.h 00:03:22.985 TEST_HEADER include/spdk/notify.h 00:03:22.985 TEST_HEADER include/spdk/nvme.h 00:03:22.985 TEST_HEADER include/spdk/nvme_intel.h 00:03:22.985 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:22.985 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:22.985 TEST_HEADER include/spdk/nvme_spec.h 00:03:22.985 TEST_HEADER include/spdk/nvme_zns.h 00:03:22.985 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:22.985 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:22.985 TEST_HEADER include/spdk/nvmf.h 00:03:22.986 TEST_HEADER include/spdk/nvmf_spec.h 00:03:22.986 CC test/app/histogram_perf/histogram_perf.o 00:03:22.986 TEST_HEADER include/spdk/nvmf_transport.h 00:03:22.986 TEST_HEADER include/spdk/opal.h 00:03:22.986 TEST_HEADER include/spdk/opal_spec.h 00:03:22.986 TEST_HEADER include/spdk/pci_ids.h 00:03:22.986 TEST_HEADER include/spdk/pipe.h 00:03:22.986 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:22.986 TEST_HEADER include/spdk/queue.h 00:03:22.986 TEST_HEADER include/spdk/reduce.h 00:03:22.986 TEST_HEADER include/spdk/rpc.h 00:03:22.986 TEST_HEADER include/spdk/scheduler.h 00:03:22.986 TEST_HEADER include/spdk/scsi.h 00:03:22.986 LINK led 
00:03:22.986 TEST_HEADER include/spdk/scsi_spec.h 00:03:22.986 TEST_HEADER include/spdk/sock.h 00:03:22.986 TEST_HEADER include/spdk/stdinc.h 00:03:22.986 TEST_HEADER include/spdk/string.h 00:03:22.986 TEST_HEADER include/spdk/thread.h 00:03:22.986 TEST_HEADER include/spdk/trace.h 00:03:22.986 TEST_HEADER include/spdk/trace_parser.h 00:03:22.986 TEST_HEADER include/spdk/tree.h 00:03:22.986 TEST_HEADER include/spdk/ublk.h 00:03:22.986 TEST_HEADER include/spdk/util.h 00:03:22.986 TEST_HEADER include/spdk/uuid.h 00:03:22.986 TEST_HEADER include/spdk/version.h 00:03:22.986 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:22.986 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:22.986 TEST_HEADER include/spdk/vhost.h 00:03:22.986 TEST_HEADER include/spdk/vmd.h 00:03:22.986 TEST_HEADER include/spdk/xor.h 00:03:22.986 TEST_HEADER include/spdk/zipf.h 00:03:22.986 CXX test/cpp_headers/accel.o 00:03:22.986 LINK bdev_svc 00:03:22.986 LINK histogram_perf 00:03:22.986 CC test/app/jsoncat/jsoncat.o 00:03:22.986 CC test/app/stub/stub.o 00:03:23.245 CXX test/cpp_headers/accel_module.o 00:03:23.245 LINK idxd_perf 00:03:23.245 LINK jsoncat 00:03:23.245 CC test/event/event_perf/event_perf.o 00:03:23.245 LINK stub 00:03:23.245 CXX test/cpp_headers/assert.o 00:03:23.245 CC examples/nvme/hello_world/hello_world.o 00:03:23.245 CC test/env/vtophys/vtophys.o 00:03:23.245 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:23.245 CC test/env/mem_callbacks/mem_callbacks.o 00:03:23.504 LINK nvme_fuzz 00:03:23.504 LINK event_perf 00:03:23.504 LINK vtophys 00:03:23.504 CXX test/cpp_headers/barrier.o 00:03:23.504 LINK env_dpdk_post_init 00:03:23.504 CC test/env/memory/memory_ut.o 00:03:23.504 LINK hello_world 00:03:23.504 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:23.504 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:23.763 CXX test/cpp_headers/base64.o 00:03:23.763 CC examples/nvme/reconnect/reconnect.o 00:03:23.763 CC test/event/reactor/reactor.o 00:03:23.763 CC 
test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:23.763 CXX test/cpp_headers/bdev.o 00:03:23.763 CC test/nvme/aer/aer.o 00:03:23.763 LINK reactor 00:03:23.763 CC test/rpc_client/rpc_client_test.o 00:03:24.022 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:24.022 LINK hello_fsdev 00:03:24.022 LINK mem_callbacks 00:03:24.022 CXX test/cpp_headers/bdev_module.o 00:03:24.022 LINK rpc_client_test 00:03:24.280 LINK reconnect 00:03:24.280 CC test/event/reactor_perf/reactor_perf.o 00:03:24.280 LINK aer 00:03:24.280 CC test/nvme/reset/reset.o 00:03:24.280 CXX test/cpp_headers/bdev_zone.o 00:03:24.280 CXX test/cpp_headers/bit_array.o 00:03:24.280 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:24.280 LINK reactor_perf 00:03:24.280 LINK vhost_fuzz 00:03:24.539 CC test/nvme/sgl/sgl.o 00:03:24.539 CC test/nvme/e2edp/nvme_dp.o 00:03:24.539 CXX test/cpp_headers/bit_pool.o 00:03:24.539 LINK reset 00:03:24.539 CC test/env/pci/pci_ut.o 00:03:24.798 CC test/event/app_repeat/app_repeat.o 00:03:24.798 CXX test/cpp_headers/blob_bdev.o 00:03:24.798 CC test/accel/dif/dif.o 00:03:24.798 LINK sgl 00:03:24.798 LINK nvme_dp 00:03:24.798 LINK app_repeat 00:03:24.798 CC test/event/scheduler/scheduler.o 00:03:25.056 LINK memory_ut 00:03:25.056 CXX test/cpp_headers/blobfs_bdev.o 00:03:25.056 LINK nvme_manage 00:03:25.056 LINK pci_ut 00:03:25.056 CC test/nvme/overhead/overhead.o 00:03:25.056 CC test/nvme/err_injection/err_injection.o 00:03:25.056 CXX test/cpp_headers/blobfs.o 00:03:25.056 CC test/nvme/startup/startup.o 00:03:25.056 LINK scheduler 00:03:25.315 CC test/nvme/reserve/reserve.o 00:03:25.315 LINK err_injection 00:03:25.315 CXX test/cpp_headers/blob.o 00:03:25.315 LINK startup 00:03:25.315 CC examples/nvme/arbitration/arbitration.o 00:03:25.315 LINK overhead 00:03:25.574 CXX test/cpp_headers/conf.o 00:03:25.574 LINK reserve 00:03:25.574 CC examples/accel/perf/accel_perf.o 00:03:25.574 CC test/nvme/simple_copy/simple_copy.o 00:03:25.574 CC examples/blob/hello_world/hello_blob.o 
00:03:25.574 CC examples/nvme/hotplug/hotplug.o 00:03:25.574 CXX test/cpp_headers/config.o 00:03:25.574 LINK dif 00:03:25.835 CXX test/cpp_headers/cpuset.o 00:03:25.835 LINK arbitration 00:03:25.835 CC test/nvme/connect_stress/connect_stress.o 00:03:25.835 CC test/nvme/boot_partition/boot_partition.o 00:03:25.835 LINK simple_copy 00:03:25.835 LINK iscsi_fuzz 00:03:25.835 CXX test/cpp_headers/crc16.o 00:03:25.835 LINK hello_blob 00:03:25.835 CXX test/cpp_headers/crc32.o 00:03:25.835 LINK hotplug 00:03:25.835 CXX test/cpp_headers/crc64.o 00:03:26.095 LINK boot_partition 00:03:26.095 LINK connect_stress 00:03:26.095 CXX test/cpp_headers/dif.o 00:03:26.095 CXX test/cpp_headers/dma.o 00:03:26.095 CC examples/blob/cli/blobcli.o 00:03:26.095 LINK accel_perf 00:03:26.355 CC test/nvme/compliance/nvme_compliance.o 00:03:26.355 CC test/nvme/fused_ordering/fused_ordering.o 00:03:26.355 CC test/blobfs/mkfs/mkfs.o 00:03:26.355 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:26.356 CXX test/cpp_headers/endian.o 00:03:26.356 CC test/bdev/bdevio/bdevio.o 00:03:26.356 CC test/lvol/esnap/esnap.o 00:03:26.356 CC examples/nvme/abort/abort.o 00:03:26.356 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:26.356 LINK cmb_copy 00:03:26.615 LINK mkfs 00:03:26.615 LINK fused_ordering 00:03:26.615 CXX test/cpp_headers/env_dpdk.o 00:03:26.615 LINK nvme_compliance 00:03:26.615 LINK pmr_persistence 00:03:26.615 CXX test/cpp_headers/env.o 00:03:26.615 LINK blobcli 00:03:26.615 CXX test/cpp_headers/event.o 00:03:26.875 LINK bdevio 00:03:26.875 CXX test/cpp_headers/fd_group.o 00:03:26.875 LINK abort 00:03:26.875 CC examples/bdev/hello_world/hello_bdev.o 00:03:26.875 CC examples/bdev/bdevperf/bdevperf.o 00:03:26.875 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:26.875 CXX test/cpp_headers/fd.o 00:03:26.875 CC test/nvme/fdp/fdp.o 00:03:26.875 CC test/nvme/cuse/cuse.o 00:03:27.134 CXX test/cpp_headers/file.o 00:03:27.134 CXX test/cpp_headers/fsdev.o 00:03:27.134 CXX 
test/cpp_headers/fsdev_module.o 00:03:27.134 CXX test/cpp_headers/ftl.o 00:03:27.134 LINK hello_bdev 00:03:27.134 LINK doorbell_aers 00:03:27.134 CXX test/cpp_headers/fuse_dispatcher.o 00:03:27.134 CXX test/cpp_headers/gpt_spec.o 00:03:27.134 CXX test/cpp_headers/hexlify.o 00:03:27.393 CXX test/cpp_headers/histogram_data.o 00:03:27.393 CXX test/cpp_headers/idxd.o 00:03:27.393 CXX test/cpp_headers/idxd_spec.o 00:03:27.393 LINK fdp 00:03:27.393 CXX test/cpp_headers/init.o 00:03:27.393 CXX test/cpp_headers/ioat.o 00:03:27.393 CXX test/cpp_headers/ioat_spec.o 00:03:27.393 CXX test/cpp_headers/iscsi_spec.o 00:03:27.393 CXX test/cpp_headers/json.o 00:03:27.393 CXX test/cpp_headers/jsonrpc.o 00:03:27.393 CXX test/cpp_headers/keyring.o 00:03:27.393 CXX test/cpp_headers/keyring_module.o 00:03:27.393 CXX test/cpp_headers/likely.o 00:03:27.393 CXX test/cpp_headers/log.o 00:03:27.653 CXX test/cpp_headers/lvol.o 00:03:27.653 CXX test/cpp_headers/md5.o 00:03:27.653 CXX test/cpp_headers/memory.o 00:03:27.653 CXX test/cpp_headers/mmio.o 00:03:27.653 CXX test/cpp_headers/nbd.o 00:03:27.653 CXX test/cpp_headers/net.o 00:03:27.653 CXX test/cpp_headers/notify.o 00:03:27.653 CXX test/cpp_headers/nvme.o 00:03:27.653 CXX test/cpp_headers/nvme_intel.o 00:03:27.914 CXX test/cpp_headers/nvme_ocssd.o 00:03:27.914 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:27.914 CXX test/cpp_headers/nvme_spec.o 00:03:27.914 CXX test/cpp_headers/nvme_zns.o 00:03:27.914 LINK bdevperf 00:03:27.914 CXX test/cpp_headers/nvmf_cmd.o 00:03:27.914 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:27.914 CXX test/cpp_headers/nvmf.o 00:03:27.914 CXX test/cpp_headers/nvmf_spec.o 00:03:27.914 CXX test/cpp_headers/nvmf_transport.o 00:03:27.914 CXX test/cpp_headers/opal.o 00:03:27.914 CXX test/cpp_headers/opal_spec.o 00:03:28.172 CXX test/cpp_headers/pci_ids.o 00:03:28.172 CXX test/cpp_headers/pipe.o 00:03:28.172 CXX test/cpp_headers/queue.o 00:03:28.172 CXX test/cpp_headers/reduce.o 00:03:28.172 CXX test/cpp_headers/rpc.o 
00:03:28.172 CXX test/cpp_headers/scheduler.o 00:03:28.172 CXX test/cpp_headers/scsi.o 00:03:28.172 CXX test/cpp_headers/scsi_spec.o 00:03:28.172 CXX test/cpp_headers/sock.o 00:03:28.172 CXX test/cpp_headers/stdinc.o 00:03:28.431 CXX test/cpp_headers/string.o 00:03:28.431 CC examples/nvmf/nvmf/nvmf.o 00:03:28.431 CXX test/cpp_headers/thread.o 00:03:28.431 CXX test/cpp_headers/trace.o 00:03:28.431 CXX test/cpp_headers/trace_parser.o 00:03:28.431 CXX test/cpp_headers/tree.o 00:03:28.431 CXX test/cpp_headers/ublk.o 00:03:28.431 CXX test/cpp_headers/util.o 00:03:28.431 CXX test/cpp_headers/uuid.o 00:03:28.431 LINK cuse 00:03:28.431 CXX test/cpp_headers/version.o 00:03:28.431 CXX test/cpp_headers/vfio_user_pci.o 00:03:28.431 CXX test/cpp_headers/vfio_user_spec.o 00:03:28.431 CXX test/cpp_headers/vhost.o 00:03:28.431 CXX test/cpp_headers/vmd.o 00:03:28.690 CXX test/cpp_headers/xor.o 00:03:28.690 CXX test/cpp_headers/zipf.o 00:03:28.690 LINK nvmf 00:03:32.935 LINK esnap 00:03:33.505 00:03:33.505 real 1m29.093s 00:03:33.505 user 7m42.906s 00:03:33.505 sys 1m38.484s 00:03:33.505 17:20:06 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:33.505 17:20:06 make -- common/autotest_common.sh@10 -- $ set +x 00:03:33.505 ************************************ 00:03:33.505 END TEST make 00:03:33.505 ************************************ 00:03:33.505 17:20:06 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:33.505 17:20:06 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:33.505 17:20:06 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:33.505 17:20:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:33.505 17:20:06 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:33.505 17:20:06 -- pm/common@44 -- $ pid=5471 00:03:33.505 17:20:06 -- pm/common@50 -- $ kill -TERM 5471 00:03:33.505 17:20:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:33.505 17:20:06 -- 
pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:33.505 17:20:06 -- pm/common@44 -- $ pid=5473 00:03:33.505 17:20:06 -- pm/common@50 -- $ kill -TERM 5473 00:03:33.505 17:20:06 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:33.505 17:20:06 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:33.505 17:20:06 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:33.505 17:20:06 -- common/autotest_common.sh@1711 -- # lcov --version 00:03:33.505 17:20:06 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:33.765 17:20:06 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:33.765 17:20:06 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:33.765 17:20:06 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:33.765 17:20:06 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:33.765 17:20:06 -- scripts/common.sh@336 -- # IFS=.-: 00:03:33.765 17:20:06 -- scripts/common.sh@336 -- # read -ra ver1 00:03:33.765 17:20:06 -- scripts/common.sh@337 -- # IFS=.-: 00:03:33.765 17:20:06 -- scripts/common.sh@337 -- # read -ra ver2 00:03:33.765 17:20:06 -- scripts/common.sh@338 -- # local 'op=<' 00:03:33.765 17:20:06 -- scripts/common.sh@340 -- # ver1_l=2 00:03:33.765 17:20:06 -- scripts/common.sh@341 -- # ver2_l=1 00:03:33.765 17:20:06 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:33.765 17:20:06 -- scripts/common.sh@344 -- # case "$op" in 00:03:33.765 17:20:06 -- scripts/common.sh@345 -- # : 1 00:03:33.765 17:20:06 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:33.765 17:20:06 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:33.765 17:20:06 -- scripts/common.sh@365 -- # decimal 1 00:03:33.765 17:20:06 -- scripts/common.sh@353 -- # local d=1 00:03:33.765 17:20:06 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:33.765 17:20:06 -- scripts/common.sh@355 -- # echo 1 00:03:33.765 17:20:06 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:33.765 17:20:06 -- scripts/common.sh@366 -- # decimal 2 00:03:33.765 17:20:06 -- scripts/common.sh@353 -- # local d=2 00:03:33.765 17:20:06 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:33.765 17:20:06 -- scripts/common.sh@355 -- # echo 2 00:03:33.765 17:20:06 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:33.765 17:20:06 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:33.765 17:20:06 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:33.765 17:20:06 -- scripts/common.sh@368 -- # return 0 00:03:33.765 17:20:06 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:33.765 17:20:06 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:33.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.765 --rc genhtml_branch_coverage=1 00:03:33.765 --rc genhtml_function_coverage=1 00:03:33.765 --rc genhtml_legend=1 00:03:33.765 --rc geninfo_all_blocks=1 00:03:33.765 --rc geninfo_unexecuted_blocks=1 00:03:33.765 00:03:33.765 ' 00:03:33.765 17:20:06 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:33.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.765 --rc genhtml_branch_coverage=1 00:03:33.765 --rc genhtml_function_coverage=1 00:03:33.765 --rc genhtml_legend=1 00:03:33.765 --rc geninfo_all_blocks=1 00:03:33.765 --rc geninfo_unexecuted_blocks=1 00:03:33.765 00:03:33.765 ' 00:03:33.765 17:20:06 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:33.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.765 --rc genhtml_branch_coverage=1 00:03:33.765 --rc 
genhtml_function_coverage=1 00:03:33.765 --rc genhtml_legend=1 00:03:33.765 --rc geninfo_all_blocks=1 00:03:33.765 --rc geninfo_unexecuted_blocks=1 00:03:33.765 00:03:33.765 ' 00:03:33.765 17:20:06 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:33.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.765 --rc genhtml_branch_coverage=1 00:03:33.765 --rc genhtml_function_coverage=1 00:03:33.765 --rc genhtml_legend=1 00:03:33.765 --rc geninfo_all_blocks=1 00:03:33.765 --rc geninfo_unexecuted_blocks=1 00:03:33.765 00:03:33.765 ' 00:03:33.765 17:20:06 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:33.765 17:20:06 -- nvmf/common.sh@7 -- # uname -s 00:03:33.765 17:20:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:33.765 17:20:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:33.765 17:20:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:33.765 17:20:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:33.765 17:20:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:33.765 17:20:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:33.765 17:20:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:33.765 17:20:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:33.765 17:20:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:33.765 17:20:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:33.765 17:20:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3ce91d1d-4798-4354-a4cf-cf5578cee81c 00:03:33.765 17:20:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=3ce91d1d-4798-4354-a4cf-cf5578cee81c 00:03:33.765 17:20:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:33.765 17:20:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:33.765 17:20:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:33.765 17:20:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:03:33.765 17:20:06 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:33.765 17:20:06 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:33.765 17:20:06 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:33.765 17:20:06 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:33.765 17:20:06 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:33.766 17:20:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:33.766 17:20:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:33.766 17:20:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:33.766 17:20:06 -- paths/export.sh@5 -- # export PATH 00:03:33.766 17:20:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:33.766 17:20:06 -- nvmf/common.sh@51 -- # : 0 00:03:33.766 17:20:06 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:33.766 17:20:06 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:33.766 17:20:06 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:03:33.766 17:20:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:33.766 17:20:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:33.766 17:20:06 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:33.766 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:33.766 17:20:06 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:33.766 17:20:06 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:33.766 17:20:06 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:33.766 17:20:06 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:33.766 17:20:07 -- spdk/autotest.sh@32 -- # uname -s 00:03:33.766 17:20:07 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:33.766 17:20:07 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:33.766 17:20:07 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:33.766 17:20:07 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:33.766 17:20:07 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:33.766 17:20:07 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:33.766 17:20:07 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:33.766 17:20:07 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:33.766 17:20:07 -- spdk/autotest.sh@48 -- # udevadm_pid=54489 00:03:33.766 17:20:07 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:33.766 17:20:07 -- pm/common@17 -- # local monitor 00:03:33.766 17:20:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:33.766 17:20:07 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:33.766 17:20:07 -- pm/common@21 -- # date +%s 00:03:33.766 17:20:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:33.766 17:20:07 -- pm/common@25 -- # sleep 1 00:03:33.766 17:20:07 -- 
pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733592007 00:03:33.766 17:20:07 -- pm/common@21 -- # date +%s 00:03:33.766 17:20:07 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733592007 00:03:33.766 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733592007_collect-cpu-load.pm.log 00:03:33.766 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733592007_collect-vmstat.pm.log 00:03:35.147 17:20:08 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:35.147 17:20:08 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:35.147 17:20:08 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:35.147 17:20:08 -- common/autotest_common.sh@10 -- # set +x 00:03:35.147 17:20:08 -- spdk/autotest.sh@59 -- # create_test_list 00:03:35.147 17:20:08 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:35.147 17:20:08 -- common/autotest_common.sh@10 -- # set +x 00:03:35.147 17:20:08 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:35.147 17:20:08 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:35.147 17:20:08 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:35.147 17:20:08 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:35.147 17:20:08 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:35.147 17:20:08 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:35.147 17:20:08 -- common/autotest_common.sh@1457 -- # uname 00:03:35.147 17:20:08 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:35.147 17:20:08 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:35.147 17:20:08 -- common/autotest_common.sh@1477 -- 
# uname 00:03:35.147 17:20:08 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:35.147 17:20:08 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:35.147 17:20:08 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:35.147 lcov: LCOV version 1.15 00:03:35.147 17:20:08 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:50.043 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:50.043 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:08.140 17:20:38 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:08.140 17:20:38 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:08.140 17:20:38 -- common/autotest_common.sh@10 -- # set +x 00:04:08.140 17:20:38 -- spdk/autotest.sh@78 -- # rm -f 00:04:08.140 17:20:38 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:08.140 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:08.140 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:08.140 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:08.140 17:20:39 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:08.140 17:20:39 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:08.140 17:20:39 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:08.140 17:20:39 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:04:08.140 
17:20:39 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:04:08.140 17:20:39 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:04:08.140 17:20:39 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:08.140 17:20:39 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:04:08.140 17:20:39 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:08.140 17:20:39 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:04:08.140 17:20:39 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:08.140 17:20:39 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:08.140 17:20:39 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:08.140 17:20:39 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:08.140 17:20:39 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:04:08.140 17:20:39 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:08.140 17:20:39 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:04:08.140 17:20:39 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:08.140 17:20:39 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:08.140 17:20:39 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:08.140 17:20:39 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:08.140 17:20:39 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:04:08.140 17:20:39 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:04:08.140 17:20:39 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:08.140 17:20:39 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:08.140 17:20:39 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:08.140 17:20:39 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:04:08.140 17:20:39 -- 
common/autotest_common.sh@1650 -- # local device=nvme1n3 00:04:08.140 17:20:39 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:08.140 17:20:39 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:08.140 17:20:39 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:08.140 17:20:39 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:08.140 17:20:39 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:08.140 17:20:39 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:08.140 17:20:39 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:08.140 17:20:39 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:08.140 No valid GPT data, bailing 00:04:08.140 17:20:39 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:08.140 17:20:39 -- scripts/common.sh@394 -- # pt= 00:04:08.140 17:20:39 -- scripts/common.sh@395 -- # return 1 00:04:08.140 17:20:39 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:08.140 1+0 records in 00:04:08.140 1+0 records out 00:04:08.140 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00666539 s, 157 MB/s 00:04:08.140 17:20:39 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:08.140 17:20:39 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:08.140 17:20:39 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:08.140 17:20:39 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:08.140 17:20:39 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:08.140 No valid GPT data, bailing 00:04:08.140 17:20:40 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:08.140 17:20:40 -- scripts/common.sh@394 -- # pt= 00:04:08.140 17:20:40 -- scripts/common.sh@395 -- # return 1 00:04:08.140 17:20:40 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:08.140 1+0 records in 00:04:08.140 1+0 records 
out 00:04:08.140 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0067868 s, 155 MB/s 00:04:08.140 17:20:40 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:08.140 17:20:40 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:08.140 17:20:40 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:08.140 17:20:40 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:08.140 17:20:40 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:08.140 No valid GPT data, bailing 00:04:08.140 17:20:40 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:08.140 17:20:40 -- scripts/common.sh@394 -- # pt= 00:04:08.140 17:20:40 -- scripts/common.sh@395 -- # return 1 00:04:08.140 17:20:40 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:08.140 1+0 records in 00:04:08.140 1+0 records out 00:04:08.141 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00420505 s, 249 MB/s 00:04:08.141 17:20:40 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:08.141 17:20:40 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:08.141 17:20:40 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:08.141 17:20:40 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:08.141 17:20:40 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:08.141 No valid GPT data, bailing 00:04:08.141 17:20:40 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:08.141 17:20:40 -- scripts/common.sh@394 -- # pt= 00:04:08.141 17:20:40 -- scripts/common.sh@395 -- # return 1 00:04:08.141 17:20:40 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:08.141 1+0 records in 00:04:08.141 1+0 records out 00:04:08.141 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00446493 s, 235 MB/s 00:04:08.141 17:20:40 -- spdk/autotest.sh@105 -- # sync 00:04:08.141 17:20:40 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd 
reap_spdk_processes 00:04:08.141 17:20:40 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:08.141 17:20:40 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:10.043 17:20:43 -- spdk/autotest.sh@111 -- # uname -s 00:04:10.043 17:20:43 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:10.043 17:20:43 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:10.043 17:20:43 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:10.611 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:10.611 Hugepages 00:04:10.611 node hugesize free / total 00:04:10.611 node0 1048576kB 0 / 0 00:04:10.611 node0 2048kB 0 / 0 00:04:10.611 00:04:10.611 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:10.869 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:10.869 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:11.128 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:11.128 17:20:44 -- spdk/autotest.sh@117 -- # uname -s 00:04:11.128 17:20:44 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:11.128 17:20:44 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:11.128 17:20:44 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:12.066 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:12.066 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:12.066 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:12.066 17:20:45 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:13.001 17:20:46 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:13.001 17:20:46 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:13.001 17:20:46 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:13.001 17:20:46 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 
00:04:13.001 17:20:46 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:13.001 17:20:46 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:13.001 17:20:46 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:13.001 17:20:46 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:13.001 17:20:46 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:13.260 17:20:46 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:13.260 17:20:46 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:13.260 17:20:46 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:13.519 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:13.778 Waiting for block devices as requested 00:04:13.778 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:13.778 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:13.778 17:20:47 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:13.778 17:20:47 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:13.778 17:20:47 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:13.778 17:20:47 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:14.038 17:20:47 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:14.038 17:20:47 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:14.038 17:20:47 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:14.038 17:20:47 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:14.038 17:20:47 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:14.038 
17:20:47 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:14.038 17:20:47 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:14.038 17:20:47 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:14.038 17:20:47 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:14.038 17:20:47 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:14.038 17:20:47 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:14.038 17:20:47 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:14.038 17:20:47 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:14.038 17:20:47 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:14.038 17:20:47 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:14.038 17:20:47 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:14.038 17:20:47 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:14.038 17:20:47 -- common/autotest_common.sh@1543 -- # continue 00:04:14.038 17:20:47 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:14.038 17:20:47 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:14.038 17:20:47 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:14.038 17:20:47 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:14.038 17:20:47 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:14.038 17:20:47 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:14.038 17:20:47 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:14.038 17:20:47 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:14.038 17:20:47 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:14.038 17:20:47 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:14.038 17:20:47 -- 
common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:14.038 17:20:47 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:14.038 17:20:47 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:14.038 17:20:47 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:14.038 17:20:47 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:14.038 17:20:47 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:14.038 17:20:47 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:14.038 17:20:47 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:14.038 17:20:47 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:14.038 17:20:47 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:14.038 17:20:47 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:14.038 17:20:47 -- common/autotest_common.sh@1543 -- # continue 00:04:14.038 17:20:47 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:14.038 17:20:47 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:14.038 17:20:47 -- common/autotest_common.sh@10 -- # set +x 00:04:14.038 17:20:47 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:14.038 17:20:47 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:14.038 17:20:47 -- common/autotest_common.sh@10 -- # set +x 00:04:14.038 17:20:47 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:14.975 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:14.975 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:14.975 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:14.975 17:20:48 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:14.975 17:20:48 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:14.975 17:20:48 -- common/autotest_common.sh@10 -- # set +x 00:04:15.234 17:20:48 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:15.234 17:20:48 -- 
common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:15.234 17:20:48 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:15.234 17:20:48 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:15.234 17:20:48 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:15.234 17:20:48 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:15.234 17:20:48 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:15.234 17:20:48 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:15.234 17:20:48 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:15.234 17:20:48 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:15.234 17:20:48 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:15.234 17:20:48 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:15.234 17:20:48 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:15.234 17:20:48 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:15.234 17:20:48 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:15.234 17:20:48 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:15.234 17:20:48 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:15.234 17:20:48 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:15.234 17:20:48 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:15.234 17:20:48 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:15.234 17:20:48 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:15.234 17:20:48 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:15.235 17:20:48 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:15.235 17:20:48 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:15.235 17:20:48 -- 
common/autotest_common.sh@1572 -- # return 0 00:04:15.235 17:20:48 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:15.235 17:20:48 -- common/autotest_common.sh@1580 -- # return 0 00:04:15.235 17:20:48 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:15.235 17:20:48 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:15.235 17:20:48 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:15.235 17:20:48 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:15.235 17:20:48 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:15.235 17:20:48 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:15.235 17:20:48 -- common/autotest_common.sh@10 -- # set +x 00:04:15.235 17:20:48 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:15.235 17:20:48 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:15.235 17:20:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.235 17:20:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.235 17:20:48 -- common/autotest_common.sh@10 -- # set +x 00:04:15.235 ************************************ 00:04:15.235 START TEST env 00:04:15.235 ************************************ 00:04:15.235 17:20:48 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:15.495 * Looking for test storage... 
00:04:15.495 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:15.495 17:20:48 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:15.495 17:20:48 env -- common/autotest_common.sh@1711 -- # lcov --version 00:04:15.495 17:20:48 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:15.495 17:20:48 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:15.495 17:20:48 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:15.495 17:20:48 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:15.495 17:20:48 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:15.495 17:20:48 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:15.495 17:20:48 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:15.495 17:20:48 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:15.495 17:20:48 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:15.495 17:20:48 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:15.495 17:20:48 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:15.495 17:20:48 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:15.495 17:20:48 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:15.495 17:20:48 env -- scripts/common.sh@344 -- # case "$op" in 00:04:15.495 17:20:48 env -- scripts/common.sh@345 -- # : 1 00:04:15.495 17:20:48 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:15.495 17:20:48 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:15.495 17:20:48 env -- scripts/common.sh@365 -- # decimal 1 00:04:15.495 17:20:48 env -- scripts/common.sh@353 -- # local d=1 00:04:15.495 17:20:48 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:15.495 17:20:48 env -- scripts/common.sh@355 -- # echo 1 00:04:15.495 17:20:48 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:15.495 17:20:48 env -- scripts/common.sh@366 -- # decimal 2 00:04:15.495 17:20:48 env -- scripts/common.sh@353 -- # local d=2 00:04:15.495 17:20:48 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:15.495 17:20:48 env -- scripts/common.sh@355 -- # echo 2 00:04:15.495 17:20:48 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:15.495 17:20:48 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:15.495 17:20:48 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:15.495 17:20:48 env -- scripts/common.sh@368 -- # return 0 00:04:15.495 17:20:48 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:15.495 17:20:48 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:15.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.495 --rc genhtml_branch_coverage=1 00:04:15.495 --rc genhtml_function_coverage=1 00:04:15.495 --rc genhtml_legend=1 00:04:15.495 --rc geninfo_all_blocks=1 00:04:15.495 --rc geninfo_unexecuted_blocks=1 00:04:15.495 00:04:15.495 ' 00:04:15.495 17:20:48 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:15.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.495 --rc genhtml_branch_coverage=1 00:04:15.495 --rc genhtml_function_coverage=1 00:04:15.495 --rc genhtml_legend=1 00:04:15.495 --rc geninfo_all_blocks=1 00:04:15.495 --rc geninfo_unexecuted_blocks=1 00:04:15.495 00:04:15.495 ' 00:04:15.495 17:20:48 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:15.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:15.495 --rc genhtml_branch_coverage=1 00:04:15.495 --rc genhtml_function_coverage=1 00:04:15.495 --rc genhtml_legend=1 00:04:15.495 --rc geninfo_all_blocks=1 00:04:15.495 --rc geninfo_unexecuted_blocks=1 00:04:15.495 00:04:15.495 ' 00:04:15.496 17:20:48 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:15.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.496 --rc genhtml_branch_coverage=1 00:04:15.496 --rc genhtml_function_coverage=1 00:04:15.496 --rc genhtml_legend=1 00:04:15.496 --rc geninfo_all_blocks=1 00:04:15.496 --rc geninfo_unexecuted_blocks=1 00:04:15.496 00:04:15.496 ' 00:04:15.496 17:20:48 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:15.496 17:20:48 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.496 17:20:48 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.496 17:20:48 env -- common/autotest_common.sh@10 -- # set +x 00:04:15.496 ************************************ 00:04:15.496 START TEST env_memory 00:04:15.496 ************************************ 00:04:15.496 17:20:48 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:15.496 00:04:15.496 00:04:15.496 CUnit - A unit testing framework for C - Version 2.1-3 00:04:15.496 http://cunit.sourceforge.net/ 00:04:15.496 00:04:15.496 00:04:15.496 Suite: memory 00:04:15.496 Test: alloc and free memory map ...[2024-12-07 17:20:48.843466] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:15.758 passed 00:04:15.758 Test: mem map translation ...[2024-12-07 17:20:48.890711] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:15.758 [2024-12-07 17:20:48.890963] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:15.758 [2024-12-07 17:20:48.891129] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:15.758 [2024-12-07 17:20:48.891225] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:15.758 passed 00:04:15.758 Test: mem map registration ...[2024-12-07 17:20:48.960892] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:15.758 [2024-12-07 17:20:48.961116] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:15.758 passed 00:04:15.758 Test: mem map adjacent registrations ...passed 00:04:15.758 00:04:15.758 Run Summary: Type Total Ran Passed Failed Inactive 00:04:15.758 suites 1 1 n/a 0 0 00:04:15.758 tests 4 4 4 0 0 00:04:15.758 asserts 152 152 152 0 n/a 00:04:15.758 00:04:15.758 Elapsed time = 0.259 seconds 00:04:15.758 00:04:15.758 real 0m0.317s 00:04:15.758 user 0m0.265s 00:04:15.758 sys 0m0.043s 00:04:15.758 17:20:49 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.758 17:20:49 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:15.758 ************************************ 00:04:15.758 END TEST env_memory 00:04:15.758 ************************************ 00:04:15.758 17:20:49 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:15.758 17:20:49 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.758 17:20:49 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.758 17:20:49 env -- common/autotest_common.sh@10 -- # set +x 00:04:16.018 
************************************ 00:04:16.018 START TEST env_vtophys 00:04:16.018 ************************************ 00:04:16.018 17:20:49 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:16.018 EAL: lib.eal log level changed from notice to debug 00:04:16.018 EAL: Detected lcore 0 as core 0 on socket 0 00:04:16.018 EAL: Detected lcore 1 as core 0 on socket 0 00:04:16.018 EAL: Detected lcore 2 as core 0 on socket 0 00:04:16.018 EAL: Detected lcore 3 as core 0 on socket 0 00:04:16.018 EAL: Detected lcore 4 as core 0 on socket 0 00:04:16.018 EAL: Detected lcore 5 as core 0 on socket 0 00:04:16.018 EAL: Detected lcore 6 as core 0 on socket 0 00:04:16.018 EAL: Detected lcore 7 as core 0 on socket 0 00:04:16.018 EAL: Detected lcore 8 as core 0 on socket 0 00:04:16.018 EAL: Detected lcore 9 as core 0 on socket 0 00:04:16.018 EAL: Maximum logical cores by configuration: 128 00:04:16.018 EAL: Detected CPU lcores: 10 00:04:16.018 EAL: Detected NUMA nodes: 1 00:04:16.018 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:16.018 EAL: Detected shared linkage of DPDK 00:04:16.018 EAL: No shared files mode enabled, IPC will be disabled 00:04:16.018 EAL: Selected IOVA mode 'PA' 00:04:16.018 EAL: Probing VFIO support... 00:04:16.018 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:16.018 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:16.018 EAL: Ask a virtual area of 0x2e000 bytes 00:04:16.018 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:16.018 EAL: Setting up physically contiguous memory... 
00:04:16.018 EAL: Setting maximum number of open files to 524288 00:04:16.018 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:16.018 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:16.018 EAL: Ask a virtual area of 0x61000 bytes 00:04:16.018 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:16.018 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:16.018 EAL: Ask a virtual area of 0x400000000 bytes 00:04:16.018 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:16.018 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:16.018 EAL: Ask a virtual area of 0x61000 bytes 00:04:16.018 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:16.018 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:16.018 EAL: Ask a virtual area of 0x400000000 bytes 00:04:16.018 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:16.018 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:16.018 EAL: Ask a virtual area of 0x61000 bytes 00:04:16.018 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:16.018 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:16.018 EAL: Ask a virtual area of 0x400000000 bytes 00:04:16.018 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:16.018 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:16.018 EAL: Ask a virtual area of 0x61000 bytes 00:04:16.018 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:16.018 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:16.018 EAL: Ask a virtual area of 0x400000000 bytes 00:04:16.018 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:16.018 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:16.018 EAL: Hugepages will be freed exactly as allocated. 
00:04:16.018 EAL: No shared files mode enabled, IPC is disabled 00:04:16.018 EAL: No shared files mode enabled, IPC is disabled 00:04:16.018 EAL: TSC frequency is ~2290000 KHz 00:04:16.018 EAL: Main lcore 0 is ready (tid=7fa17f411a40;cpuset=[0]) 00:04:16.018 EAL: Trying to obtain current memory policy. 00:04:16.018 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.018 EAL: Restoring previous memory policy: 0 00:04:16.018 EAL: request: mp_malloc_sync 00:04:16.018 EAL: No shared files mode enabled, IPC is disabled 00:04:16.018 EAL: Heap on socket 0 was expanded by 2MB 00:04:16.018 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:16.018 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:16.018 EAL: Mem event callback 'spdk:(nil)' registered 00:04:16.018 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:16.018 00:04:16.018 00:04:16.018 CUnit - A unit testing framework for C - Version 2.1-3 00:04:16.018 http://cunit.sourceforge.net/ 00:04:16.018 00:04:16.018 00:04:16.018 Suite: components_suite 00:04:16.588 Test: vtophys_malloc_test ...passed 00:04:16.588 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:16.588 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.588 EAL: Restoring previous memory policy: 4 00:04:16.588 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.588 EAL: request: mp_malloc_sync 00:04:16.588 EAL: No shared files mode enabled, IPC is disabled 00:04:16.588 EAL: Heap on socket 0 was expanded by 4MB 00:04:16.588 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.588 EAL: request: mp_malloc_sync 00:04:16.588 EAL: No shared files mode enabled, IPC is disabled 00:04:16.588 EAL: Heap on socket 0 was shrunk by 4MB 00:04:16.588 EAL: Trying to obtain current memory policy. 
00:04:16.588 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.588 EAL: Restoring previous memory policy: 4 00:04:16.588 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.588 EAL: request: mp_malloc_sync 00:04:16.588 EAL: No shared files mode enabled, IPC is disabled 00:04:16.588 EAL: Heap on socket 0 was expanded by 6MB 00:04:16.588 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.588 EAL: request: mp_malloc_sync 00:04:16.588 EAL: No shared files mode enabled, IPC is disabled 00:04:16.588 EAL: Heap on socket 0 was shrunk by 6MB 00:04:16.588 EAL: Trying to obtain current memory policy. 00:04:16.588 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.588 EAL: Restoring previous memory policy: 4 00:04:16.588 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.588 EAL: request: mp_malloc_sync 00:04:16.588 EAL: No shared files mode enabled, IPC is disabled 00:04:16.588 EAL: Heap on socket 0 was expanded by 10MB 00:04:16.588 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.588 EAL: request: mp_malloc_sync 00:04:16.588 EAL: No shared files mode enabled, IPC is disabled 00:04:16.588 EAL: Heap on socket 0 was shrunk by 10MB 00:04:16.588 EAL: Trying to obtain current memory policy. 00:04:16.588 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.588 EAL: Restoring previous memory policy: 4 00:04:16.588 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.588 EAL: request: mp_malloc_sync 00:04:16.588 EAL: No shared files mode enabled, IPC is disabled 00:04:16.588 EAL: Heap on socket 0 was expanded by 18MB 00:04:16.588 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.588 EAL: request: mp_malloc_sync 00:04:16.588 EAL: No shared files mode enabled, IPC is disabled 00:04:16.588 EAL: Heap on socket 0 was shrunk by 18MB 00:04:16.588 EAL: Trying to obtain current memory policy. 
00:04:16.588 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.588 EAL: Restoring previous memory policy: 4 00:04:16.588 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.588 EAL: request: mp_malloc_sync 00:04:16.588 EAL: No shared files mode enabled, IPC is disabled 00:04:16.588 EAL: Heap on socket 0 was expanded by 34MB 00:04:16.588 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.588 EAL: request: mp_malloc_sync 00:04:16.588 EAL: No shared files mode enabled, IPC is disabled 00:04:16.588 EAL: Heap on socket 0 was shrunk by 34MB 00:04:16.847 EAL: Trying to obtain current memory policy. 00:04:16.847 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.847 EAL: Restoring previous memory policy: 4 00:04:16.847 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.847 EAL: request: mp_malloc_sync 00:04:16.847 EAL: No shared files mode enabled, IPC is disabled 00:04:16.847 EAL: Heap on socket 0 was expanded by 66MB 00:04:16.847 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.847 EAL: request: mp_malloc_sync 00:04:16.847 EAL: No shared files mode enabled, IPC is disabled 00:04:16.847 EAL: Heap on socket 0 was shrunk by 66MB 00:04:17.106 EAL: Trying to obtain current memory policy. 00:04:17.106 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:17.106 EAL: Restoring previous memory policy: 4 00:04:17.106 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.106 EAL: request: mp_malloc_sync 00:04:17.106 EAL: No shared files mode enabled, IPC is disabled 00:04:17.106 EAL: Heap on socket 0 was expanded by 130MB 00:04:17.364 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.364 EAL: request: mp_malloc_sync 00:04:17.364 EAL: No shared files mode enabled, IPC is disabled 00:04:17.364 EAL: Heap on socket 0 was shrunk by 130MB 00:04:17.623 EAL: Trying to obtain current memory policy. 
00:04:17.623 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:17.623 EAL: Restoring previous memory policy: 4 00:04:17.623 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.623 EAL: request: mp_malloc_sync 00:04:17.623 EAL: No shared files mode enabled, IPC is disabled 00:04:17.623 EAL: Heap on socket 0 was expanded by 258MB 00:04:17.882 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.141 EAL: request: mp_malloc_sync 00:04:18.141 EAL: No shared files mode enabled, IPC is disabled 00:04:18.141 EAL: Heap on socket 0 was shrunk by 258MB 00:04:18.400 EAL: Trying to obtain current memory policy. 00:04:18.400 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:18.660 EAL: Restoring previous memory policy: 4 00:04:18.660 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.660 EAL: request: mp_malloc_sync 00:04:18.660 EAL: No shared files mode enabled, IPC is disabled 00:04:18.660 EAL: Heap on socket 0 was expanded by 514MB 00:04:19.598 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.598 EAL: request: mp_malloc_sync 00:04:19.598 EAL: No shared files mode enabled, IPC is disabled 00:04:19.598 EAL: Heap on socket 0 was shrunk by 514MB 00:04:20.548 EAL: Trying to obtain current memory policy. 
00:04:20.548 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.807 EAL: Restoring previous memory policy: 4 00:04:20.807 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.807 EAL: request: mp_malloc_sync 00:04:20.807 EAL: No shared files mode enabled, IPC is disabled 00:04:20.807 EAL: Heap on socket 0 was expanded by 1026MB 00:04:22.713 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.713 EAL: request: mp_malloc_sync 00:04:22.713 EAL: No shared files mode enabled, IPC is disabled 00:04:22.713 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:24.622 passed 00:04:24.622 00:04:24.622 Run Summary: Type Total Ran Passed Failed Inactive 00:04:24.622 suites 1 1 n/a 0 0 00:04:24.622 tests 2 2 2 0 0 00:04:24.622 asserts 5782 5782 5782 0 n/a 00:04:24.622 00:04:24.622 Elapsed time = 8.371 seconds 00:04:24.622 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.622 EAL: request: mp_malloc_sync 00:04:24.622 EAL: No shared files mode enabled, IPC is disabled 00:04:24.622 EAL: Heap on socket 0 was shrunk by 2MB 00:04:24.622 EAL: No shared files mode enabled, IPC is disabled 00:04:24.622 EAL: No shared files mode enabled, IPC is disabled 00:04:24.622 EAL: No shared files mode enabled, IPC is disabled 00:04:24.622 00:04:24.622 real 0m8.705s 00:04:24.622 user 0m7.709s 00:04:24.622 sys 0m0.838s 00:04:24.622 17:20:57 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:24.622 17:20:57 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:24.622 ************************************ 00:04:24.622 END TEST env_vtophys 00:04:24.622 ************************************ 00:04:24.622 17:20:57 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:24.622 17:20:57 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:24.622 17:20:57 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:24.622 17:20:57 env -- common/autotest_common.sh@10 -- # set +x 00:04:24.622 
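Editor's note: the env_vtophys run above grows and shrinks the EAL heap in steps that fit the pattern 2^k + 2 MB (34, 66, 130, 258, 514, 1026). The sketch below only reproduces that observed sequence; it is an annotation on this log, not SPDK or DPDK code, and the recurrence is inferred from the reported sizes.

```python
def expansion_steps(first=34, last=1026):
    """Yield the heap-expansion sizes (in MB) reported by the EAL above.

    The logged sizes fit 2**k + 2 MB, so each step is 2*size - 2:
    34 -> 66 -> 130 -> 258 -> 514 -> 1026.
    """
    size = first
    while size <= last:
        yield size
        size = 2 * size - 2  # (2**k + 2) doubles to (2**(k+1) + 2)

print(list(expansion_steps()))
```

Each expansion is matched by a "shrunk by N MB" callback when the test frees the buffer, which is why the expand/shrink messages above come in pairs.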
************************************ 00:04:24.622 START TEST env_pci 00:04:24.622 ************************************ 00:04:24.622 17:20:57 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:24.622 00:04:24.622 00:04:24.622 CUnit - A unit testing framework for C - Version 2.1-3 00:04:24.622 http://cunit.sourceforge.net/ 00:04:24.622 00:04:24.622 00:04:24.622 Suite: pci 00:04:24.622 Test: pci_hook ...[2024-12-07 17:20:57.954261] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56804 has claimed it 00:04:24.622 passed 00:04:24.622 00:04:24.622 Run Summary: Type Total Ran Passed Failed Inactive 00:04:24.622 suites 1 1 n/a 0 0 00:04:24.622 tests 1 1 1 0 0 00:04:24.622 asserts 25 25 25 0 n/a 00:04:24.622 00:04:24.622 Elapsed time = 0.010 seconds 00:04:24.622 EAL: Cannot find device (10000:00:01.0) 00:04:24.622 EAL: Failed to attach device on primary process 00:04:24.881 00:04:24.881 real 0m0.106s 00:04:24.881 user 0m0.049s 00:04:24.881 sys 0m0.056s 00:04:24.881 17:20:58 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:24.881 17:20:58 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:24.881 ************************************ 00:04:24.881 END TEST env_pci 00:04:24.881 ************************************ 00:04:24.881 17:20:58 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:24.881 17:20:58 env -- env/env.sh@15 -- # uname 00:04:24.881 17:20:58 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:24.881 17:20:58 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:24.881 17:20:58 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:24.881 17:20:58 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:24.881 17:20:58 env 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:24.881 17:20:58 env -- common/autotest_common.sh@10 -- # set +x 00:04:24.881 ************************************ 00:04:24.881 START TEST env_dpdk_post_init 00:04:24.881 ************************************ 00:04:24.881 17:20:58 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:24.881 EAL: Detected CPU lcores: 10 00:04:24.881 EAL: Detected NUMA nodes: 1 00:04:24.881 EAL: Detected shared linkage of DPDK 00:04:24.881 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:24.881 EAL: Selected IOVA mode 'PA' 00:04:25.141 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:25.141 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:25.141 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:25.142 Starting DPDK initialization... 00:04:25.142 Starting SPDK post initialization... 00:04:25.142 SPDK NVMe probe 00:04:25.142 Attaching to 0000:00:10.0 00:04:25.142 Attaching to 0000:00:11.0 00:04:25.142 Attached to 0000:00:10.0 00:04:25.142 Attached to 0000:00:11.0 00:04:25.142 Cleaning up... 
00:04:25.142 00:04:25.142 real 0m0.295s 00:04:25.142 user 0m0.089s 00:04:25.142 sys 0m0.107s 00:04:25.142 17:20:58 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:25.142 17:20:58 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:25.142 ************************************ 00:04:25.142 END TEST env_dpdk_post_init 00:04:25.142 ************************************ 00:04:25.142 17:20:58 env -- env/env.sh@26 -- # uname 00:04:25.142 17:20:58 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:25.142 17:20:58 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:25.142 17:20:58 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:25.142 17:20:58 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:25.142 17:20:58 env -- common/autotest_common.sh@10 -- # set +x 00:04:25.142 ************************************ 00:04:25.142 START TEST env_mem_callbacks 00:04:25.142 ************************************ 00:04:25.142 17:20:58 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:25.142 EAL: Detected CPU lcores: 10 00:04:25.142 EAL: Detected NUMA nodes: 1 00:04:25.142 EAL: Detected shared linkage of DPDK 00:04:25.431 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:25.431 EAL: Selected IOVA mode 'PA' 00:04:25.431 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:25.431 00:04:25.431 00:04:25.431 CUnit - A unit testing framework for C - Version 2.1-3 00:04:25.431 http://cunit.sourceforge.net/ 00:04:25.431 00:04:25.431 00:04:25.431 Suite: memory 00:04:25.431 Test: test ... 
00:04:25.431 register 0x200000200000 2097152 00:04:25.431 malloc 3145728 00:04:25.431 register 0x200000400000 4194304 00:04:25.431 buf 0x2000004fffc0 len 3145728 PASSED 00:04:25.431 malloc 64 00:04:25.431 buf 0x2000004ffec0 len 64 PASSED 00:04:25.431 malloc 4194304 00:04:25.431 register 0x200000800000 6291456 00:04:25.431 buf 0x2000009fffc0 len 4194304 PASSED 00:04:25.431 free 0x2000004fffc0 3145728 00:04:25.431 free 0x2000004ffec0 64 00:04:25.431 unregister 0x200000400000 4194304 PASSED 00:04:25.431 free 0x2000009fffc0 4194304 00:04:25.431 unregister 0x200000800000 6291456 PASSED 00:04:25.431 malloc 8388608 00:04:25.431 register 0x200000400000 10485760 00:04:25.431 buf 0x2000005fffc0 len 8388608 PASSED 00:04:25.431 free 0x2000005fffc0 8388608 00:04:25.431 unregister 0x200000400000 10485760 PASSED 00:04:25.432 passed 00:04:25.432 00:04:25.432 Run Summary: Type Total Ran Passed Failed Inactive 00:04:25.432 suites 1 1 n/a 0 0 00:04:25.432 tests 1 1 1 0 0 00:04:25.432 asserts 15 15 15 0 n/a 00:04:25.432 00:04:25.432 Elapsed time = 0.091 seconds 00:04:25.432 00:04:25.432 real 0m0.295s 00:04:25.432 user 0m0.119s 00:04:25.432 sys 0m0.074s 00:04:25.432 17:20:58 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:25.432 17:20:58 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:25.432 ************************************ 00:04:25.432 END TEST env_mem_callbacks 00:04:25.432 ************************************ 00:04:25.703 00:04:25.703 real 0m10.307s 00:04:25.703 user 0m8.468s 00:04:25.703 sys 0m1.492s 00:04:25.703 17:20:58 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:25.703 17:20:58 env -- common/autotest_common.sh@10 -- # set +x 00:04:25.703 ************************************ 00:04:25.703 END TEST env 00:04:25.703 ************************************ 00:04:25.703 17:20:58 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:25.703 17:20:58 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:25.703 17:20:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:25.703 17:20:58 -- common/autotest_common.sh@10 -- # set +x 00:04:25.703 ************************************ 00:04:25.703 START TEST rpc 00:04:25.703 ************************************ 00:04:25.703 17:20:58 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:25.703 * Looking for test storage... 00:04:25.703 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:25.703 17:20:58 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:25.703 17:20:59 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:25.703 17:20:59 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:26.004 17:20:59 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:26.004 17:20:59 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:26.004 17:20:59 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:26.004 17:20:59 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:26.004 17:20:59 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:26.004 17:20:59 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:26.004 17:20:59 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:26.004 17:20:59 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:26.004 17:20:59 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:26.004 17:20:59 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:26.004 17:20:59 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:26.004 17:20:59 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:26.004 17:20:59 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:26.004 17:20:59 rpc -- scripts/common.sh@345 -- # : 1 00:04:26.004 17:20:59 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:26.004 17:20:59 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:26.004 17:20:59 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:26.004 17:20:59 rpc -- scripts/common.sh@353 -- # local d=1 00:04:26.004 17:20:59 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:26.004 17:20:59 rpc -- scripts/common.sh@355 -- # echo 1 00:04:26.004 17:20:59 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:26.004 17:20:59 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:26.004 17:20:59 rpc -- scripts/common.sh@353 -- # local d=2 00:04:26.004 17:20:59 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:26.004 17:20:59 rpc -- scripts/common.sh@355 -- # echo 2 00:04:26.004 17:20:59 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:26.004 17:20:59 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:26.004 17:20:59 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:26.004 17:20:59 rpc -- scripts/common.sh@368 -- # return 0 00:04:26.004 17:20:59 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:26.004 17:20:59 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:26.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.004 --rc genhtml_branch_coverage=1 00:04:26.004 --rc genhtml_function_coverage=1 00:04:26.004 --rc genhtml_legend=1 00:04:26.004 --rc geninfo_all_blocks=1 00:04:26.004 --rc geninfo_unexecuted_blocks=1 00:04:26.004 00:04:26.005 ' 00:04:26.005 17:20:59 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:26.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.005 --rc genhtml_branch_coverage=1 00:04:26.005 --rc genhtml_function_coverage=1 00:04:26.005 --rc genhtml_legend=1 00:04:26.005 --rc geninfo_all_blocks=1 00:04:26.005 --rc geninfo_unexecuted_blocks=1 00:04:26.005 00:04:26.005 ' 00:04:26.005 17:20:59 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:26.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:26.005 --rc genhtml_branch_coverage=1 00:04:26.005 --rc genhtml_function_coverage=1 00:04:26.005 --rc genhtml_legend=1 00:04:26.005 --rc geninfo_all_blocks=1 00:04:26.005 --rc geninfo_unexecuted_blocks=1 00:04:26.005 00:04:26.005 ' 00:04:26.005 17:20:59 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:26.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.005 --rc genhtml_branch_coverage=1 00:04:26.005 --rc genhtml_function_coverage=1 00:04:26.005 --rc genhtml_legend=1 00:04:26.005 --rc geninfo_all_blocks=1 00:04:26.005 --rc geninfo_unexecuted_blocks=1 00:04:26.005 00:04:26.005 ' 00:04:26.005 17:20:59 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56931 00:04:26.005 17:20:59 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:26.005 17:20:59 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:26.005 17:20:59 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56931 00:04:26.005 17:20:59 rpc -- common/autotest_common.sh@835 -- # '[' -z 56931 ']' 00:04:26.005 17:20:59 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:26.005 17:20:59 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:26.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:26.005 17:20:59 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:26.005 17:20:59 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:26.005 17:20:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.005 [2024-12-07 17:20:59.221862] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
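Editor's note: the `cmp_versions` / `lt` trace above (from scripts/common.sh) splits each version on separators and compares it field by field, treating missing fields as zero — that is how `lt 1.15 2` returns true for the lcov check. A rough Python equivalent of that logic (a sketch, not the actual shell code; the shell helper also splits on `-` and `:`, which this simplification omits):

```python
def version_lt(a: str, b: str) -> bool:
    """Return True if dotted version a is strictly less than b, field by field."""
    fa = [int(x) for x in a.split(".")]
    fb = [int(x) for x in b.split(".")]
    length = max(len(fa), len(fb))
    fa += [0] * (length - len(fa))  # missing fields compare as 0
    fb += [0] * (length - len(fb))
    for x, y in zip(fa, fb):
        if x != y:
            return x < y
    return False  # equal versions are not "less than"

print(version_lt("1.15", "2"))
```

Numeric field-wise comparison matters here: a plain string compare would wrongly order "1.15" after "1.2".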
00:04:26.005 [2024-12-07 17:20:59.222014] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56931 ] 00:04:26.264 [2024-12-07 17:20:59.400979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.264 [2024-12-07 17:20:59.523783] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:26.264 [2024-12-07 17:20:59.523848] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56931' to capture a snapshot of events at runtime. 00:04:26.264 [2024-12-07 17:20:59.523859] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:26.264 [2024-12-07 17:20:59.523869] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:26.264 [2024-12-07 17:20:59.523877] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56931 for offline analysis/debug. 
00:04:26.264 [2024-12-07 17:20:59.525052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.202 17:21:00 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:27.202 17:21:00 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:27.202 17:21:00 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:27.202 17:21:00 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:27.202 17:21:00 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:27.202 17:21:00 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:27.202 17:21:00 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:27.202 17:21:00 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.202 17:21:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.202 ************************************ 00:04:27.202 START TEST rpc_integrity 00:04:27.202 ************************************ 00:04:27.202 17:21:00 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:27.202 17:21:00 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:27.202 17:21:00 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.202 17:21:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.202 17:21:00 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:27.202 17:21:00 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:27.202 17:21:00 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:27.202 17:21:00 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:27.202 17:21:00 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:27.202 17:21:00 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.202 17:21:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.202 17:21:00 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:27.202 17:21:00 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:27.202 17:21:00 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:27.202 17:21:00 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.202 17:21:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.462 17:21:00 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:27.462 17:21:00 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:27.462 { 00:04:27.462 "name": "Malloc0", 00:04:27.462 "aliases": [ 00:04:27.462 "fdfec7da-538d-4d96-b3c0-b4d3da457bda" 00:04:27.462 ], 00:04:27.462 "product_name": "Malloc disk", 00:04:27.462 "block_size": 512, 00:04:27.462 "num_blocks": 16384, 00:04:27.462 "uuid": "fdfec7da-538d-4d96-b3c0-b4d3da457bda", 00:04:27.462 "assigned_rate_limits": { 00:04:27.462 "rw_ios_per_sec": 0, 00:04:27.462 "rw_mbytes_per_sec": 0, 00:04:27.462 "r_mbytes_per_sec": 0, 00:04:27.462 "w_mbytes_per_sec": 0 00:04:27.462 }, 00:04:27.462 "claimed": false, 00:04:27.462 "zoned": false, 00:04:27.462 "supported_io_types": { 00:04:27.462 "read": true, 00:04:27.462 "write": true, 00:04:27.462 "unmap": true, 00:04:27.462 "flush": true, 00:04:27.462 "reset": true, 00:04:27.462 "nvme_admin": false, 00:04:27.462 "nvme_io": false, 00:04:27.462 "nvme_io_md": false, 00:04:27.462 "write_zeroes": true, 00:04:27.462 "zcopy": true, 00:04:27.462 "get_zone_info": false, 00:04:27.462 "zone_management": false, 00:04:27.462 "zone_append": false, 00:04:27.462 "compare": false, 00:04:27.462 "compare_and_write": false, 00:04:27.462 "abort": true, 00:04:27.462 "seek_hole": false, 
00:04:27.462 "seek_data": false, 00:04:27.462 "copy": true, 00:04:27.462 "nvme_iov_md": false 00:04:27.462 }, 00:04:27.462 "memory_domains": [ 00:04:27.462 { 00:04:27.462 "dma_device_id": "system", 00:04:27.462 "dma_device_type": 1 00:04:27.462 }, 00:04:27.462 { 00:04:27.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:27.462 "dma_device_type": 2 00:04:27.462 } 00:04:27.462 ], 00:04:27.462 "driver_specific": {} 00:04:27.462 } 00:04:27.462 ]' 00:04:27.462 17:21:00 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:27.462 17:21:00 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:27.462 17:21:00 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:27.462 17:21:00 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.462 17:21:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.462 [2024-12-07 17:21:00.646881] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:27.462 [2024-12-07 17:21:00.646996] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:27.462 [2024-12-07 17:21:00.647036] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:04:27.462 [2024-12-07 17:21:00.647058] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:27.462 [2024-12-07 17:21:00.649674] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:27.462 [2024-12-07 17:21:00.649732] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:27.462 Passthru0 00:04:27.462 17:21:00 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:27.462 17:21:00 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:27.462 17:21:00 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.462 17:21:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:04:27.462 17:21:00 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:27.462 17:21:00 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:27.462 { 00:04:27.462 "name": "Malloc0", 00:04:27.462 "aliases": [ 00:04:27.462 "fdfec7da-538d-4d96-b3c0-b4d3da457bda" 00:04:27.462 ], 00:04:27.462 "product_name": "Malloc disk", 00:04:27.462 "block_size": 512, 00:04:27.462 "num_blocks": 16384, 00:04:27.462 "uuid": "fdfec7da-538d-4d96-b3c0-b4d3da457bda", 00:04:27.462 "assigned_rate_limits": { 00:04:27.462 "rw_ios_per_sec": 0, 00:04:27.462 "rw_mbytes_per_sec": 0, 00:04:27.462 "r_mbytes_per_sec": 0, 00:04:27.462 "w_mbytes_per_sec": 0 00:04:27.462 }, 00:04:27.462 "claimed": true, 00:04:27.462 "claim_type": "exclusive_write", 00:04:27.462 "zoned": false, 00:04:27.462 "supported_io_types": { 00:04:27.462 "read": true, 00:04:27.462 "write": true, 00:04:27.462 "unmap": true, 00:04:27.462 "flush": true, 00:04:27.462 "reset": true, 00:04:27.462 "nvme_admin": false, 00:04:27.462 "nvme_io": false, 00:04:27.462 "nvme_io_md": false, 00:04:27.462 "write_zeroes": true, 00:04:27.462 "zcopy": true, 00:04:27.462 "get_zone_info": false, 00:04:27.462 "zone_management": false, 00:04:27.462 "zone_append": false, 00:04:27.462 "compare": false, 00:04:27.462 "compare_and_write": false, 00:04:27.462 "abort": true, 00:04:27.462 "seek_hole": false, 00:04:27.462 "seek_data": false, 00:04:27.462 "copy": true, 00:04:27.462 "nvme_iov_md": false 00:04:27.462 }, 00:04:27.462 "memory_domains": [ 00:04:27.462 { 00:04:27.462 "dma_device_id": "system", 00:04:27.462 "dma_device_type": 1 00:04:27.462 }, 00:04:27.462 { 00:04:27.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:27.462 "dma_device_type": 2 00:04:27.462 } 00:04:27.462 ], 00:04:27.462 "driver_specific": {} 00:04:27.462 }, 00:04:27.462 { 00:04:27.462 "name": "Passthru0", 00:04:27.462 "aliases": [ 00:04:27.462 "e9436488-77ac-57fa-8272-a548b51439c9" 00:04:27.462 ], 00:04:27.462 "product_name": "passthru", 00:04:27.462 
"block_size": 512, 00:04:27.462 "num_blocks": 16384, 00:04:27.462 "uuid": "e9436488-77ac-57fa-8272-a548b51439c9", 00:04:27.462 "assigned_rate_limits": { 00:04:27.462 "rw_ios_per_sec": 0, 00:04:27.462 "rw_mbytes_per_sec": 0, 00:04:27.462 "r_mbytes_per_sec": 0, 00:04:27.462 "w_mbytes_per_sec": 0 00:04:27.462 }, 00:04:27.462 "claimed": false, 00:04:27.462 "zoned": false, 00:04:27.462 "supported_io_types": { 00:04:27.462 "read": true, 00:04:27.462 "write": true, 00:04:27.462 "unmap": true, 00:04:27.462 "flush": true, 00:04:27.462 "reset": true, 00:04:27.462 "nvme_admin": false, 00:04:27.462 "nvme_io": false, 00:04:27.462 "nvme_io_md": false, 00:04:27.462 "write_zeroes": true, 00:04:27.462 "zcopy": true, 00:04:27.462 "get_zone_info": false, 00:04:27.462 "zone_management": false, 00:04:27.462 "zone_append": false, 00:04:27.462 "compare": false, 00:04:27.462 "compare_and_write": false, 00:04:27.462 "abort": true, 00:04:27.462 "seek_hole": false, 00:04:27.462 "seek_data": false, 00:04:27.462 "copy": true, 00:04:27.462 "nvme_iov_md": false 00:04:27.462 }, 00:04:27.462 "memory_domains": [ 00:04:27.462 { 00:04:27.462 "dma_device_id": "system", 00:04:27.462 "dma_device_type": 1 00:04:27.462 }, 00:04:27.462 { 00:04:27.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:27.462 "dma_device_type": 2 00:04:27.462 } 00:04:27.462 ], 00:04:27.462 "driver_specific": { 00:04:27.462 "passthru": { 00:04:27.462 "name": "Passthru0", 00:04:27.462 "base_bdev_name": "Malloc0" 00:04:27.462 } 00:04:27.462 } 00:04:27.462 } 00:04:27.462 ]' 00:04:27.462 17:21:00 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:27.462 17:21:00 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:27.462 17:21:00 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:27.462 17:21:00 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.462 17:21:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.462 17:21:00 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:27.462 17:21:00 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:27.462 17:21:00 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.462 17:21:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.462 17:21:00 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:27.462 17:21:00 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:27.462 17:21:00 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.463 17:21:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.463 17:21:00 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:27.463 17:21:00 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:27.463 17:21:00 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:27.723 ************************************ 00:04:27.723 END TEST rpc_integrity 00:04:27.723 ************************************ 00:04:27.723 17:21:00 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:27.723 00:04:27.723 real 0m0.363s 00:04:27.723 user 0m0.188s 00:04:27.723 sys 0m0.063s 00:04:27.723 17:21:00 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:27.723 17:21:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.723 17:21:00 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:27.723 17:21:00 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:27.723 17:21:00 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.723 17:21:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.723 ************************************ 00:04:27.723 START TEST rpc_plugins 00:04:27.723 ************************************ 00:04:27.723 17:21:00 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:27.723 17:21:00 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:04:27.723 17:21:00 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.723 17:21:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:27.723 17:21:00 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:27.723 17:21:00 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:27.723 17:21:00 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:27.723 17:21:00 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.723 17:21:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:27.723 17:21:00 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:27.723 17:21:00 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:27.723 { 00:04:27.723 "name": "Malloc1", 00:04:27.723 "aliases": [ 00:04:27.723 "5e4d55a9-33e2-46ca-84af-082f9b3f8b7f" 00:04:27.723 ], 00:04:27.723 "product_name": "Malloc disk", 00:04:27.723 "block_size": 4096, 00:04:27.723 "num_blocks": 256, 00:04:27.723 "uuid": "5e4d55a9-33e2-46ca-84af-082f9b3f8b7f", 00:04:27.723 "assigned_rate_limits": { 00:04:27.723 "rw_ios_per_sec": 0, 00:04:27.723 "rw_mbytes_per_sec": 0, 00:04:27.723 "r_mbytes_per_sec": 0, 00:04:27.723 "w_mbytes_per_sec": 0 00:04:27.723 }, 00:04:27.723 "claimed": false, 00:04:27.723 "zoned": false, 00:04:27.723 "supported_io_types": { 00:04:27.723 "read": true, 00:04:27.723 "write": true, 00:04:27.723 "unmap": true, 00:04:27.723 "flush": true, 00:04:27.723 "reset": true, 00:04:27.723 "nvme_admin": false, 00:04:27.723 "nvme_io": false, 00:04:27.723 "nvme_io_md": false, 00:04:27.723 "write_zeroes": true, 00:04:27.723 "zcopy": true, 00:04:27.723 "get_zone_info": false, 00:04:27.723 "zone_management": false, 00:04:27.723 "zone_append": false, 00:04:27.723 "compare": false, 00:04:27.723 "compare_and_write": false, 00:04:27.723 "abort": true, 00:04:27.723 "seek_hole": false, 00:04:27.723 "seek_data": false, 00:04:27.723 "copy": 
true, 00:04:27.723 "nvme_iov_md": false 00:04:27.723 }, 00:04:27.723 "memory_domains": [ 00:04:27.723 { 00:04:27.723 "dma_device_id": "system", 00:04:27.723 "dma_device_type": 1 00:04:27.723 }, 00:04:27.723 { 00:04:27.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:27.723 "dma_device_type": 2 00:04:27.723 } 00:04:27.723 ], 00:04:27.723 "driver_specific": {} 00:04:27.723 } 00:04:27.723 ]' 00:04:27.723 17:21:00 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:27.723 17:21:01 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:27.723 17:21:01 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:27.723 17:21:01 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.723 17:21:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:27.723 17:21:01 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:27.723 17:21:01 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:27.723 17:21:01 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.723 17:21:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:27.723 17:21:01 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:27.723 17:21:01 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:27.723 17:21:01 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:27.723 17:21:01 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:27.723 00:04:27.723 real 0m0.178s 00:04:27.723 user 0m0.101s 00:04:27.723 sys 0m0.031s 00:04:27.723 ************************************ 00:04:27.723 END TEST rpc_plugins 00:04:27.723 ************************************ 00:04:27.723 17:21:01 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:27.723 17:21:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:27.982 17:21:01 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:27.982 17:21:01 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:27.982 17:21:01 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.982 17:21:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.982 ************************************ 00:04:27.982 START TEST rpc_trace_cmd_test 00:04:27.982 ************************************ 00:04:27.982 17:21:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:27.982 17:21:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:27.982 17:21:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:27.982 17:21:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.982 17:21:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:27.982 17:21:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:27.982 17:21:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:27.982 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56931", 00:04:27.982 "tpoint_group_mask": "0x8", 00:04:27.982 "iscsi_conn": { 00:04:27.982 "mask": "0x2", 00:04:27.982 "tpoint_mask": "0x0" 00:04:27.982 }, 00:04:27.982 "scsi": { 00:04:27.982 "mask": "0x4", 00:04:27.982 "tpoint_mask": "0x0" 00:04:27.982 }, 00:04:27.982 "bdev": { 00:04:27.982 "mask": "0x8", 00:04:27.982 "tpoint_mask": "0xffffffffffffffff" 00:04:27.982 }, 00:04:27.982 "nvmf_rdma": { 00:04:27.982 "mask": "0x10", 00:04:27.982 "tpoint_mask": "0x0" 00:04:27.982 }, 00:04:27.982 "nvmf_tcp": { 00:04:27.982 "mask": "0x20", 00:04:27.982 "tpoint_mask": "0x0" 00:04:27.982 }, 00:04:27.982 "ftl": { 00:04:27.982 "mask": "0x40", 00:04:27.982 "tpoint_mask": "0x0" 00:04:27.982 }, 00:04:27.982 "blobfs": { 00:04:27.982 "mask": "0x80", 00:04:27.982 "tpoint_mask": "0x0" 00:04:27.982 }, 00:04:27.982 "dsa": { 00:04:27.982 "mask": "0x200", 00:04:27.982 "tpoint_mask": "0x0" 00:04:27.982 }, 00:04:27.982 "thread": { 00:04:27.982 "mask": "0x400", 00:04:27.982 
"tpoint_mask": "0x0" 00:04:27.982 }, 00:04:27.982 "nvme_pcie": { 00:04:27.982 "mask": "0x800", 00:04:27.982 "tpoint_mask": "0x0" 00:04:27.982 }, 00:04:27.982 "iaa": { 00:04:27.982 "mask": "0x1000", 00:04:27.982 "tpoint_mask": "0x0" 00:04:27.982 }, 00:04:27.982 "nvme_tcp": { 00:04:27.982 "mask": "0x2000", 00:04:27.982 "tpoint_mask": "0x0" 00:04:27.982 }, 00:04:27.982 "bdev_nvme": { 00:04:27.982 "mask": "0x4000", 00:04:27.982 "tpoint_mask": "0x0" 00:04:27.982 }, 00:04:27.982 "sock": { 00:04:27.982 "mask": "0x8000", 00:04:27.982 "tpoint_mask": "0x0" 00:04:27.982 }, 00:04:27.982 "blob": { 00:04:27.982 "mask": "0x10000", 00:04:27.982 "tpoint_mask": "0x0" 00:04:27.982 }, 00:04:27.982 "bdev_raid": { 00:04:27.982 "mask": "0x20000", 00:04:27.982 "tpoint_mask": "0x0" 00:04:27.982 }, 00:04:27.982 "scheduler": { 00:04:27.982 "mask": "0x40000", 00:04:27.982 "tpoint_mask": "0x0" 00:04:27.982 } 00:04:27.982 }' 00:04:27.982 17:21:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:27.982 17:21:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:27.982 17:21:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:27.982 17:21:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:27.982 17:21:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:27.982 17:21:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:27.982 17:21:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:28.241 17:21:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:28.241 17:21:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:28.241 ************************************ 00:04:28.241 END TEST rpc_trace_cmd_test 00:04:28.241 ************************************ 00:04:28.241 17:21:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:28.241 00:04:28.241 real 0m0.242s 00:04:28.241 user 
0m0.188s 00:04:28.241 sys 0m0.044s 00:04:28.241 17:21:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:28.241 17:21:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:28.241 17:21:01 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:28.241 17:21:01 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:28.241 17:21:01 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:28.241 17:21:01 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:28.241 17:21:01 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:28.241 17:21:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.241 ************************************ 00:04:28.241 START TEST rpc_daemon_integrity 00:04:28.241 ************************************ 00:04:28.241 17:21:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:28.241 17:21:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:28.241 17:21:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.241 17:21:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.241 17:21:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.241 17:21:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:28.241 17:21:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:28.241 17:21:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:28.241 17:21:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:28.241 17:21:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.241 17:21:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.241 17:21:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.241 17:21:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:04:28.241 17:21:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:28.241 17:21:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.241 17:21:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.241 17:21:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.241 17:21:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:28.241 { 00:04:28.241 "name": "Malloc2", 00:04:28.241 "aliases": [ 00:04:28.241 "120cae4f-5fa6-4ae6-a22b-34232acdd2f8" 00:04:28.241 ], 00:04:28.241 "product_name": "Malloc disk", 00:04:28.241 "block_size": 512, 00:04:28.241 "num_blocks": 16384, 00:04:28.241 "uuid": "120cae4f-5fa6-4ae6-a22b-34232acdd2f8", 00:04:28.241 "assigned_rate_limits": { 00:04:28.242 "rw_ios_per_sec": 0, 00:04:28.242 "rw_mbytes_per_sec": 0, 00:04:28.242 "r_mbytes_per_sec": 0, 00:04:28.242 "w_mbytes_per_sec": 0 00:04:28.242 }, 00:04:28.242 "claimed": false, 00:04:28.242 "zoned": false, 00:04:28.242 "supported_io_types": { 00:04:28.242 "read": true, 00:04:28.242 "write": true, 00:04:28.242 "unmap": true, 00:04:28.242 "flush": true, 00:04:28.242 "reset": true, 00:04:28.242 "nvme_admin": false, 00:04:28.242 "nvme_io": false, 00:04:28.242 "nvme_io_md": false, 00:04:28.242 "write_zeroes": true, 00:04:28.242 "zcopy": true, 00:04:28.242 "get_zone_info": false, 00:04:28.242 "zone_management": false, 00:04:28.242 "zone_append": false, 00:04:28.242 "compare": false, 00:04:28.242 "compare_and_write": false, 00:04:28.242 "abort": true, 00:04:28.242 "seek_hole": false, 00:04:28.242 "seek_data": false, 00:04:28.242 "copy": true, 00:04:28.242 "nvme_iov_md": false 00:04:28.242 }, 00:04:28.242 "memory_domains": [ 00:04:28.242 { 00:04:28.242 "dma_device_id": "system", 00:04:28.242 "dma_device_type": 1 00:04:28.242 }, 00:04:28.242 { 00:04:28.242 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:28.242 "dma_device_type": 2 00:04:28.242 } 
00:04:28.242 ], 00:04:28.242 "driver_specific": {} 00:04:28.242 } 00:04:28.242 ]' 00:04:28.242 17:21:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:28.500 17:21:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:28.500 17:21:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:28.500 17:21:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.500 17:21:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.500 [2024-12-07 17:21:01.642590] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:28.500 [2024-12-07 17:21:01.642738] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:28.500 [2024-12-07 17:21:01.642770] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:04:28.500 [2024-12-07 17:21:01.642782] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:28.500 [2024-12-07 17:21:01.645178] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:28.500 [2024-12-07 17:21:01.645227] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:28.500 Passthru0 00:04:28.500 17:21:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.500 17:21:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:28.500 17:21:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.500 17:21:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.500 17:21:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.500 17:21:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:28.500 { 00:04:28.500 "name": "Malloc2", 00:04:28.500 "aliases": [ 00:04:28.500 "120cae4f-5fa6-4ae6-a22b-34232acdd2f8" 
00:04:28.500 ], 00:04:28.500 "product_name": "Malloc disk", 00:04:28.500 "block_size": 512, 00:04:28.500 "num_blocks": 16384, 00:04:28.500 "uuid": "120cae4f-5fa6-4ae6-a22b-34232acdd2f8", 00:04:28.500 "assigned_rate_limits": { 00:04:28.500 "rw_ios_per_sec": 0, 00:04:28.500 "rw_mbytes_per_sec": 0, 00:04:28.500 "r_mbytes_per_sec": 0, 00:04:28.500 "w_mbytes_per_sec": 0 00:04:28.500 }, 00:04:28.500 "claimed": true, 00:04:28.500 "claim_type": "exclusive_write", 00:04:28.500 "zoned": false, 00:04:28.500 "supported_io_types": { 00:04:28.500 "read": true, 00:04:28.500 "write": true, 00:04:28.500 "unmap": true, 00:04:28.500 "flush": true, 00:04:28.500 "reset": true, 00:04:28.500 "nvme_admin": false, 00:04:28.500 "nvme_io": false, 00:04:28.500 "nvme_io_md": false, 00:04:28.500 "write_zeroes": true, 00:04:28.500 "zcopy": true, 00:04:28.500 "get_zone_info": false, 00:04:28.500 "zone_management": false, 00:04:28.500 "zone_append": false, 00:04:28.500 "compare": false, 00:04:28.500 "compare_and_write": false, 00:04:28.500 "abort": true, 00:04:28.500 "seek_hole": false, 00:04:28.500 "seek_data": false, 00:04:28.500 "copy": true, 00:04:28.500 "nvme_iov_md": false 00:04:28.500 }, 00:04:28.500 "memory_domains": [ 00:04:28.500 { 00:04:28.500 "dma_device_id": "system", 00:04:28.500 "dma_device_type": 1 00:04:28.500 }, 00:04:28.500 { 00:04:28.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:28.500 "dma_device_type": 2 00:04:28.500 } 00:04:28.500 ], 00:04:28.500 "driver_specific": {} 00:04:28.500 }, 00:04:28.500 { 00:04:28.500 "name": "Passthru0", 00:04:28.500 "aliases": [ 00:04:28.500 "4f428480-7ee1-531c-9828-21f81de4ab5e" 00:04:28.500 ], 00:04:28.500 "product_name": "passthru", 00:04:28.500 "block_size": 512, 00:04:28.500 "num_blocks": 16384, 00:04:28.500 "uuid": "4f428480-7ee1-531c-9828-21f81de4ab5e", 00:04:28.500 "assigned_rate_limits": { 00:04:28.500 "rw_ios_per_sec": 0, 00:04:28.500 "rw_mbytes_per_sec": 0, 00:04:28.500 "r_mbytes_per_sec": 0, 00:04:28.500 "w_mbytes_per_sec": 0 
00:04:28.500 }, 00:04:28.500 "claimed": false, 00:04:28.500 "zoned": false, 00:04:28.500 "supported_io_types": { 00:04:28.500 "read": true, 00:04:28.500 "write": true, 00:04:28.500 "unmap": true, 00:04:28.500 "flush": true, 00:04:28.500 "reset": true, 00:04:28.500 "nvme_admin": false, 00:04:28.500 "nvme_io": false, 00:04:28.500 "nvme_io_md": false, 00:04:28.500 "write_zeroes": true, 00:04:28.500 "zcopy": true, 00:04:28.500 "get_zone_info": false, 00:04:28.500 "zone_management": false, 00:04:28.500 "zone_append": false, 00:04:28.500 "compare": false, 00:04:28.500 "compare_and_write": false, 00:04:28.500 "abort": true, 00:04:28.500 "seek_hole": false, 00:04:28.500 "seek_data": false, 00:04:28.500 "copy": true, 00:04:28.500 "nvme_iov_md": false 00:04:28.501 }, 00:04:28.501 "memory_domains": [ 00:04:28.501 { 00:04:28.501 "dma_device_id": "system", 00:04:28.501 "dma_device_type": 1 00:04:28.501 }, 00:04:28.501 { 00:04:28.501 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:28.501 "dma_device_type": 2 00:04:28.501 } 00:04:28.501 ], 00:04:28.501 "driver_specific": { 00:04:28.501 "passthru": { 00:04:28.501 "name": "Passthru0", 00:04:28.501 "base_bdev_name": "Malloc2" 00:04:28.501 } 00:04:28.501 } 00:04:28.501 } 00:04:28.501 ]' 00:04:28.501 17:21:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:28.501 17:21:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:28.501 17:21:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:28.501 17:21:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.501 17:21:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.501 17:21:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.501 17:21:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:28.501 17:21:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:04:28.501 17:21:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.501 17:21:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.501 17:21:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:28.501 17:21:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.501 17:21:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.501 17:21:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.501 17:21:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:28.501 17:21:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:28.501 ************************************ 00:04:28.501 END TEST rpc_daemon_integrity 00:04:28.501 ************************************ 00:04:28.501 17:21:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:28.501 00:04:28.501 real 0m0.364s 00:04:28.501 user 0m0.182s 00:04:28.501 sys 0m0.062s 00:04:28.501 17:21:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:28.501 17:21:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.501 17:21:01 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:28.501 17:21:01 rpc -- rpc/rpc.sh@84 -- # killprocess 56931 00:04:28.501 17:21:01 rpc -- common/autotest_common.sh@954 -- # '[' -z 56931 ']' 00:04:28.501 17:21:01 rpc -- common/autotest_common.sh@958 -- # kill -0 56931 00:04:28.760 17:21:01 rpc -- common/autotest_common.sh@959 -- # uname 00:04:28.760 17:21:01 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:28.760 17:21:01 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56931 00:04:28.760 killing process with pid 56931 00:04:28.760 17:21:01 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:28.760 17:21:01 rpc -- common/autotest_common.sh@964 -- 
# '[' reactor_0 = sudo ']' 00:04:28.760 17:21:01 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56931' 00:04:28.760 17:21:01 rpc -- common/autotest_common.sh@973 -- # kill 56931 00:04:28.760 17:21:01 rpc -- common/autotest_common.sh@978 -- # wait 56931 00:04:31.300 00:04:31.300 real 0m5.537s 00:04:31.300 user 0m6.052s 00:04:31.300 sys 0m1.003s 00:04:31.300 ************************************ 00:04:31.300 END TEST rpc 00:04:31.300 ************************************ 00:04:31.300 17:21:04 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.300 17:21:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.300 17:21:04 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:31.300 17:21:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:31.300 17:21:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.300 17:21:04 -- common/autotest_common.sh@10 -- # set +x 00:04:31.300 ************************************ 00:04:31.300 START TEST skip_rpc 00:04:31.300 ************************************ 00:04:31.300 17:21:04 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:31.300 * Looking for test storage... 
00:04:31.300 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:31.300 17:21:04 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:31.300 17:21:04 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:31.300 17:21:04 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:31.560 17:21:04 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:31.560 17:21:04 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:31.560 17:21:04 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:31.560 17:21:04 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:31.560 17:21:04 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:31.560 17:21:04 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:31.560 17:21:04 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:31.560 17:21:04 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:31.560 17:21:04 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:31.560 17:21:04 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:31.560 17:21:04 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:31.560 17:21:04 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:31.560 17:21:04 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:31.560 17:21:04 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:31.561 17:21:04 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:31.561 17:21:04 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:31.561 17:21:04 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:31.561 17:21:04 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:31.561 17:21:04 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:31.561 17:21:04 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:31.561 17:21:04 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:31.561 17:21:04 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:31.561 17:21:04 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:31.561 17:21:04 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:31.561 17:21:04 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:31.561 17:21:04 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:31.561 17:21:04 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:31.561 17:21:04 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:31.561 17:21:04 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:31.561 17:21:04 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:31.561 17:21:04 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:31.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.561 --rc genhtml_branch_coverage=1 00:04:31.561 --rc genhtml_function_coverage=1 00:04:31.561 --rc genhtml_legend=1 00:04:31.561 --rc geninfo_all_blocks=1 00:04:31.561 --rc geninfo_unexecuted_blocks=1 00:04:31.561 00:04:31.561 ' 00:04:31.561 17:21:04 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:31.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.561 --rc genhtml_branch_coverage=1 00:04:31.561 --rc genhtml_function_coverage=1 00:04:31.561 --rc genhtml_legend=1 00:04:31.561 --rc geninfo_all_blocks=1 00:04:31.561 --rc geninfo_unexecuted_blocks=1 00:04:31.561 00:04:31.561 ' 00:04:31.561 17:21:04 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:04:31.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.561 --rc genhtml_branch_coverage=1 00:04:31.561 --rc genhtml_function_coverage=1 00:04:31.561 --rc genhtml_legend=1 00:04:31.561 --rc geninfo_all_blocks=1 00:04:31.561 --rc geninfo_unexecuted_blocks=1 00:04:31.561 00:04:31.561 ' 00:04:31.561 17:21:04 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:31.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.561 --rc genhtml_branch_coverage=1 00:04:31.561 --rc genhtml_function_coverage=1 00:04:31.561 --rc genhtml_legend=1 00:04:31.561 --rc geninfo_all_blocks=1 00:04:31.561 --rc geninfo_unexecuted_blocks=1 00:04:31.561 00:04:31.561 ' 00:04:31.561 17:21:04 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:31.561 17:21:04 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:31.561 17:21:04 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:31.561 17:21:04 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:31.561 17:21:04 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.561 17:21:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.561 ************************************ 00:04:31.561 START TEST skip_rpc 00:04:31.561 ************************************ 00:04:31.561 17:21:04 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:31.561 17:21:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57160 00:04:31.561 17:21:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:31.561 17:21:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:31.561 17:21:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:31.561 [2024-12-07 17:21:04.828137] Starting SPDK v25.01-pre 
git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:04:31.561 [2024-12-07 17:21:04.828375] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57160 ] 00:04:31.820 [2024-12-07 17:21:05.002963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.820 [2024-12-07 17:21:05.118448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.095 17:21:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:37.095 17:21:09 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:37.095 17:21:09 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:37.095 17:21:09 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:37.095 17:21:09 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:37.095 17:21:09 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:37.095 17:21:09 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:37.095 17:21:09 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:37.095 17:21:09 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.095 17:21:09 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.095 17:21:09 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:37.095 17:21:09 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:37.095 17:21:09 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:37.095 17:21:09 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:37.095 17:21:09 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:04:37.095 17:21:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:37.095 17:21:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57160 00:04:37.095 17:21:09 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57160 ']' 00:04:37.095 17:21:09 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57160 00:04:37.095 17:21:09 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:37.095 17:21:09 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:37.095 17:21:09 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57160 00:04:37.095 17:21:09 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:37.095 17:21:09 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:37.095 17:21:09 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57160' 00:04:37.095 killing process with pid 57160 00:04:37.095 17:21:09 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57160 00:04:37.095 17:21:09 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57160 00:04:38.999 00:04:38.999 real 0m7.461s 00:04:38.999 user 0m7.000s 00:04:38.999 sys 0m0.380s 00:04:38.999 17:21:12 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.999 17:21:12 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.999 ************************************ 00:04:38.999 END TEST skip_rpc 00:04:38.999 ************************************ 00:04:38.999 17:21:12 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:38.999 17:21:12 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.999 17:21:12 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.999 17:21:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.999 
************************************ 00:04:38.999 START TEST skip_rpc_with_json 00:04:38.999 ************************************ 00:04:38.999 17:21:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:38.999 17:21:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:38.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:38.999 17:21:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57275 00:04:38.999 17:21:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:38.999 17:21:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57275 00:04:38.999 17:21:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57275 ']' 00:04:38.999 17:21:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.999 17:21:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:38.999 17:21:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:38.999 17:21:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:38.999 17:21:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:38.999 17:21:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:38.999 [2024-12-07 17:21:12.365572] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:04:38.999 [2024-12-07 17:21:12.365847] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57275 ] 00:04:39.261 [2024-12-07 17:21:12.550928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.520 [2024-12-07 17:21:12.668883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.457 17:21:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:40.457 17:21:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:40.457 17:21:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:40.457 17:21:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.457 17:21:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:40.457 [2024-12-07 17:21:13.570539] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:40.457 request: 00:04:40.457 { 00:04:40.457 "trtype": "tcp", 00:04:40.457 "method": "nvmf_get_transports", 00:04:40.457 "req_id": 1 00:04:40.457 } 00:04:40.457 Got JSON-RPC error response 00:04:40.457 response: 00:04:40.457 { 00:04:40.457 "code": -19, 00:04:40.457 "message": "No such device" 00:04:40.457 } 00:04:40.457 17:21:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:40.457 17:21:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:40.457 17:21:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.457 17:21:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:40.457 [2024-12-07 17:21:13.582684] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:04:40.457 17:21:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.457 17:21:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:40.457 17:21:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.457 17:21:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:40.457 17:21:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.457 17:21:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:40.457 { 00:04:40.457 "subsystems": [ 00:04:40.457 { 00:04:40.457 "subsystem": "fsdev", 00:04:40.457 "config": [ 00:04:40.457 { 00:04:40.457 "method": "fsdev_set_opts", 00:04:40.457 "params": { 00:04:40.457 "fsdev_io_pool_size": 65535, 00:04:40.457 "fsdev_io_cache_size": 256 00:04:40.457 } 00:04:40.457 } 00:04:40.457 ] 00:04:40.457 }, 00:04:40.457 { 00:04:40.457 "subsystem": "keyring", 00:04:40.457 "config": [] 00:04:40.457 }, 00:04:40.457 { 00:04:40.457 "subsystem": "iobuf", 00:04:40.457 "config": [ 00:04:40.457 { 00:04:40.457 "method": "iobuf_set_options", 00:04:40.457 "params": { 00:04:40.457 "small_pool_count": 8192, 00:04:40.457 "large_pool_count": 1024, 00:04:40.457 "small_bufsize": 8192, 00:04:40.457 "large_bufsize": 135168, 00:04:40.457 "enable_numa": false 00:04:40.457 } 00:04:40.457 } 00:04:40.457 ] 00:04:40.457 }, 00:04:40.457 { 00:04:40.457 "subsystem": "sock", 00:04:40.457 "config": [ 00:04:40.457 { 00:04:40.457 "method": "sock_set_default_impl", 00:04:40.457 "params": { 00:04:40.457 "impl_name": "posix" 00:04:40.457 } 00:04:40.457 }, 00:04:40.457 { 00:04:40.457 "method": "sock_impl_set_options", 00:04:40.457 "params": { 00:04:40.457 "impl_name": "ssl", 00:04:40.457 "recv_buf_size": 4096, 00:04:40.457 "send_buf_size": 4096, 00:04:40.457 "enable_recv_pipe": true, 00:04:40.457 "enable_quickack": false, 00:04:40.457 
"enable_placement_id": 0, 00:04:40.457 "enable_zerocopy_send_server": true, 00:04:40.457 "enable_zerocopy_send_client": false, 00:04:40.457 "zerocopy_threshold": 0, 00:04:40.457 "tls_version": 0, 00:04:40.457 "enable_ktls": false 00:04:40.457 } 00:04:40.457 }, 00:04:40.457 { 00:04:40.457 "method": "sock_impl_set_options", 00:04:40.457 "params": { 00:04:40.457 "impl_name": "posix", 00:04:40.457 "recv_buf_size": 2097152, 00:04:40.457 "send_buf_size": 2097152, 00:04:40.458 "enable_recv_pipe": true, 00:04:40.458 "enable_quickack": false, 00:04:40.458 "enable_placement_id": 0, 00:04:40.458 "enable_zerocopy_send_server": true, 00:04:40.458 "enable_zerocopy_send_client": false, 00:04:40.458 "zerocopy_threshold": 0, 00:04:40.458 "tls_version": 0, 00:04:40.458 "enable_ktls": false 00:04:40.458 } 00:04:40.458 } 00:04:40.458 ] 00:04:40.458 }, 00:04:40.458 { 00:04:40.458 "subsystem": "vmd", 00:04:40.458 "config": [] 00:04:40.458 }, 00:04:40.458 { 00:04:40.458 "subsystem": "accel", 00:04:40.458 "config": [ 00:04:40.458 { 00:04:40.458 "method": "accel_set_options", 00:04:40.458 "params": { 00:04:40.458 "small_cache_size": 128, 00:04:40.458 "large_cache_size": 16, 00:04:40.458 "task_count": 2048, 00:04:40.458 "sequence_count": 2048, 00:04:40.458 "buf_count": 2048 00:04:40.458 } 00:04:40.458 } 00:04:40.458 ] 00:04:40.458 }, 00:04:40.458 { 00:04:40.458 "subsystem": "bdev", 00:04:40.458 "config": [ 00:04:40.458 { 00:04:40.458 "method": "bdev_set_options", 00:04:40.458 "params": { 00:04:40.458 "bdev_io_pool_size": 65535, 00:04:40.458 "bdev_io_cache_size": 256, 00:04:40.458 "bdev_auto_examine": true, 00:04:40.458 "iobuf_small_cache_size": 128, 00:04:40.458 "iobuf_large_cache_size": 16 00:04:40.458 } 00:04:40.458 }, 00:04:40.458 { 00:04:40.458 "method": "bdev_raid_set_options", 00:04:40.458 "params": { 00:04:40.458 "process_window_size_kb": 1024, 00:04:40.458 "process_max_bandwidth_mb_sec": 0 00:04:40.458 } 00:04:40.458 }, 00:04:40.458 { 00:04:40.458 "method": "bdev_iscsi_set_options", 
00:04:40.458 "params": { 00:04:40.458 "timeout_sec": 30 00:04:40.458 } 00:04:40.458 }, 00:04:40.458 { 00:04:40.458 "method": "bdev_nvme_set_options", 00:04:40.458 "params": { 00:04:40.458 "action_on_timeout": "none", 00:04:40.458 "timeout_us": 0, 00:04:40.458 "timeout_admin_us": 0, 00:04:40.458 "keep_alive_timeout_ms": 10000, 00:04:40.458 "arbitration_burst": 0, 00:04:40.458 "low_priority_weight": 0, 00:04:40.458 "medium_priority_weight": 0, 00:04:40.458 "high_priority_weight": 0, 00:04:40.458 "nvme_adminq_poll_period_us": 10000, 00:04:40.458 "nvme_ioq_poll_period_us": 0, 00:04:40.458 "io_queue_requests": 0, 00:04:40.458 "delay_cmd_submit": true, 00:04:40.458 "transport_retry_count": 4, 00:04:40.458 "bdev_retry_count": 3, 00:04:40.458 "transport_ack_timeout": 0, 00:04:40.458 "ctrlr_loss_timeout_sec": 0, 00:04:40.458 "reconnect_delay_sec": 0, 00:04:40.458 "fast_io_fail_timeout_sec": 0, 00:04:40.458 "disable_auto_failback": false, 00:04:40.458 "generate_uuids": false, 00:04:40.458 "transport_tos": 0, 00:04:40.458 "nvme_error_stat": false, 00:04:40.458 "rdma_srq_size": 0, 00:04:40.458 "io_path_stat": false, 00:04:40.458 "allow_accel_sequence": false, 00:04:40.458 "rdma_max_cq_size": 0, 00:04:40.458 "rdma_cm_event_timeout_ms": 0, 00:04:40.458 "dhchap_digests": [ 00:04:40.458 "sha256", 00:04:40.458 "sha384", 00:04:40.458 "sha512" 00:04:40.458 ], 00:04:40.458 "dhchap_dhgroups": [ 00:04:40.458 "null", 00:04:40.458 "ffdhe2048", 00:04:40.458 "ffdhe3072", 00:04:40.458 "ffdhe4096", 00:04:40.458 "ffdhe6144", 00:04:40.458 "ffdhe8192" 00:04:40.458 ] 00:04:40.458 } 00:04:40.458 }, 00:04:40.458 { 00:04:40.458 "method": "bdev_nvme_set_hotplug", 00:04:40.458 "params": { 00:04:40.458 "period_us": 100000, 00:04:40.458 "enable": false 00:04:40.458 } 00:04:40.458 }, 00:04:40.458 { 00:04:40.458 "method": "bdev_wait_for_examine" 00:04:40.458 } 00:04:40.458 ] 00:04:40.458 }, 00:04:40.458 { 00:04:40.458 "subsystem": "scsi", 00:04:40.458 "config": null 00:04:40.458 }, 00:04:40.458 { 
00:04:40.458 "subsystem": "scheduler", 00:04:40.458 "config": [ 00:04:40.458 { 00:04:40.458 "method": "framework_set_scheduler", 00:04:40.458 "params": { 00:04:40.458 "name": "static" 00:04:40.458 } 00:04:40.458 } 00:04:40.458 ] 00:04:40.458 }, 00:04:40.458 { 00:04:40.458 "subsystem": "vhost_scsi", 00:04:40.458 "config": [] 00:04:40.458 }, 00:04:40.458 { 00:04:40.458 "subsystem": "vhost_blk", 00:04:40.458 "config": [] 00:04:40.458 }, 00:04:40.458 { 00:04:40.458 "subsystem": "ublk", 00:04:40.458 "config": [] 00:04:40.458 }, 00:04:40.458 { 00:04:40.458 "subsystem": "nbd", 00:04:40.458 "config": [] 00:04:40.458 }, 00:04:40.458 { 00:04:40.458 "subsystem": "nvmf", 00:04:40.458 "config": [ 00:04:40.458 { 00:04:40.458 "method": "nvmf_set_config", 00:04:40.458 "params": { 00:04:40.458 "discovery_filter": "match_any", 00:04:40.458 "admin_cmd_passthru": { 00:04:40.458 "identify_ctrlr": false 00:04:40.458 }, 00:04:40.458 "dhchap_digests": [ 00:04:40.458 "sha256", 00:04:40.458 "sha384", 00:04:40.458 "sha512" 00:04:40.458 ], 00:04:40.458 "dhchap_dhgroups": [ 00:04:40.458 "null", 00:04:40.458 "ffdhe2048", 00:04:40.458 "ffdhe3072", 00:04:40.458 "ffdhe4096", 00:04:40.458 "ffdhe6144", 00:04:40.458 "ffdhe8192" 00:04:40.458 ] 00:04:40.458 } 00:04:40.458 }, 00:04:40.458 { 00:04:40.458 "method": "nvmf_set_max_subsystems", 00:04:40.458 "params": { 00:04:40.458 "max_subsystems": 1024 00:04:40.458 } 00:04:40.458 }, 00:04:40.458 { 00:04:40.458 "method": "nvmf_set_crdt", 00:04:40.458 "params": { 00:04:40.458 "crdt1": 0, 00:04:40.458 "crdt2": 0, 00:04:40.458 "crdt3": 0 00:04:40.458 } 00:04:40.458 }, 00:04:40.458 { 00:04:40.458 "method": "nvmf_create_transport", 00:04:40.458 "params": { 00:04:40.458 "trtype": "TCP", 00:04:40.458 "max_queue_depth": 128, 00:04:40.458 "max_io_qpairs_per_ctrlr": 127, 00:04:40.458 "in_capsule_data_size": 4096, 00:04:40.458 "max_io_size": 131072, 00:04:40.458 "io_unit_size": 131072, 00:04:40.458 "max_aq_depth": 128, 00:04:40.459 "num_shared_buffers": 511, 
00:04:40.459 "buf_cache_size": 4294967295, 00:04:40.459 "dif_insert_or_strip": false, 00:04:40.459 "zcopy": false, 00:04:40.459 "c2h_success": true, 00:04:40.459 "sock_priority": 0, 00:04:40.459 "abort_timeout_sec": 1, 00:04:40.459 "ack_timeout": 0, 00:04:40.459 "data_wr_pool_size": 0 00:04:40.459 } 00:04:40.459 } 00:04:40.459 ] 00:04:40.459 }, 00:04:40.459 { 00:04:40.459 "subsystem": "iscsi", 00:04:40.459 "config": [ 00:04:40.459 { 00:04:40.459 "method": "iscsi_set_options", 00:04:40.459 "params": { 00:04:40.459 "node_base": "iqn.2016-06.io.spdk", 00:04:40.459 "max_sessions": 128, 00:04:40.459 "max_connections_per_session": 2, 00:04:40.459 "max_queue_depth": 64, 00:04:40.459 "default_time2wait": 2, 00:04:40.459 "default_time2retain": 20, 00:04:40.459 "first_burst_length": 8192, 00:04:40.459 "immediate_data": true, 00:04:40.459 "allow_duplicated_isid": false, 00:04:40.459 "error_recovery_level": 0, 00:04:40.459 "nop_timeout": 60, 00:04:40.459 "nop_in_interval": 30, 00:04:40.459 "disable_chap": false, 00:04:40.459 "require_chap": false, 00:04:40.459 "mutual_chap": false, 00:04:40.459 "chap_group": 0, 00:04:40.459 "max_large_datain_per_connection": 64, 00:04:40.459 "max_r2t_per_connection": 4, 00:04:40.459 "pdu_pool_size": 36864, 00:04:40.459 "immediate_data_pool_size": 16384, 00:04:40.459 "data_out_pool_size": 2048 00:04:40.459 } 00:04:40.459 } 00:04:40.459 ] 00:04:40.459 } 00:04:40.459 ] 00:04:40.459 } 00:04:40.459 17:21:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:40.459 17:21:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57275 00:04:40.459 17:21:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57275 ']' 00:04:40.459 17:21:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57275 00:04:40.459 17:21:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:40.459 17:21:13 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:40.459 17:21:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57275 00:04:40.459 killing process with pid 57275 00:04:40.459 17:21:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:40.459 17:21:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:40.459 17:21:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57275' 00:04:40.459 17:21:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57275 00:04:40.459 17:21:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57275 00:04:42.992 17:21:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:42.992 17:21:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57326 00:04:42.992 17:21:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:48.267 17:21:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57326 00:04:48.267 17:21:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57326 ']' 00:04:48.267 17:21:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57326 00:04:48.267 17:21:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:48.267 17:21:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:48.267 17:21:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57326 00:04:48.267 17:21:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:48.267 17:21:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:04:48.267 17:21:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57326' 00:04:48.267 killing process with pid 57326 00:04:48.267 17:21:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57326 00:04:48.267 17:21:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57326 00:04:50.821 17:21:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:50.821 17:21:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:50.821 00:04:50.821 real 0m11.425s 00:04:50.821 user 0m10.851s 00:04:50.821 sys 0m0.881s 00:04:50.821 17:21:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.821 17:21:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:50.821 ************************************ 00:04:50.822 END TEST skip_rpc_with_json 00:04:50.822 ************************************ 00:04:50.822 17:21:23 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:50.822 17:21:23 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.822 17:21:23 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.822 17:21:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.822 ************************************ 00:04:50.822 START TEST skip_rpc_with_delay 00:04:50.822 ************************************ 00:04:50.822 17:21:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:50.822 17:21:23 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:50.822 17:21:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:50.822 17:21:23 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:50.822 17:21:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:50.822 17:21:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:50.822 17:21:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:50.822 17:21:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:50.822 17:21:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:50.822 17:21:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:50.822 17:21:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:50.822 17:21:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:50.822 17:21:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:50.822 [2024-12-07 17:21:23.866470] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:50.822 17:21:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:50.822 17:21:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:50.822 17:21:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:50.822 17:21:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:50.822 00:04:50.822 real 0m0.192s 00:04:50.822 user 0m0.102s 00:04:50.822 sys 0m0.087s 00:04:50.822 17:21:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.822 17:21:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:50.822 ************************************ 00:04:50.822 END TEST skip_rpc_with_delay 00:04:50.822 ************************************ 00:04:50.822 17:21:23 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:50.822 17:21:23 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:50.822 17:21:23 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:50.822 17:21:23 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.822 17:21:23 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.822 17:21:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.822 ************************************ 00:04:50.822 START TEST exit_on_failed_rpc_init 00:04:50.822 ************************************ 00:04:50.822 17:21:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:50.822 17:21:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57459 00:04:50.822 17:21:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:50.822 17:21:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57459 00:04:50.822 17:21:24 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57459 ']' 00:04:50.822 17:21:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.822 17:21:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:50.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:50.822 17:21:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.822 17:21:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:50.822 17:21:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:50.822 [2024-12-07 17:21:24.126409] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:04:50.822 [2024-12-07 17:21:24.126562] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57459 ] 00:04:51.080 [2024-12-07 17:21:24.287008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.080 [2024-12-07 17:21:24.431618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.018 17:21:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:52.018 17:21:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:52.018 17:21:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:52.018 17:21:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:52.018 17:21:25 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:52.018 17:21:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:52.018 17:21:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:52.018 17:21:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:52.018 17:21:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:52.018 17:21:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:52.018 17:21:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:52.018 17:21:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:52.018 17:21:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:52.018 17:21:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:52.018 17:21:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:52.276 [2024-12-07 17:21:25.446081] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:04:52.276 [2024-12-07 17:21:25.446354] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57477 ] 00:04:52.276 [2024-12-07 17:21:25.608999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.536 [2024-12-07 17:21:25.771298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:52.536 [2024-12-07 17:21:25.771530] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:52.536 [2024-12-07 17:21:25.771614] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:52.536 [2024-12-07 17:21:25.771659] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:52.796 17:21:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:52.796 17:21:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:52.796 17:21:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:52.796 17:21:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:52.796 17:21:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:52.796 17:21:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:52.796 17:21:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:52.796 17:21:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57459 00:04:52.796 17:21:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57459 ']' 00:04:52.796 17:21:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57459 00:04:52.796 17:21:26 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:52.796 17:21:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:52.796 17:21:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57459 00:04:52.796 killing process with pid 57459 00:04:52.796 17:21:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:52.796 17:21:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:52.796 17:21:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57459' 00:04:52.796 17:21:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57459 00:04:52.796 17:21:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57459 00:04:55.337 00:04:55.337 real 0m4.496s 00:04:55.337 user 0m4.877s 00:04:55.337 sys 0m0.623s 00:04:55.337 ************************************ 00:04:55.337 END TEST exit_on_failed_rpc_init 00:04:55.337 ************************************ 00:04:55.337 17:21:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.337 17:21:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:55.337 17:21:28 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:55.337 00:04:55.337 real 0m24.080s 00:04:55.337 user 0m23.022s 00:04:55.337 sys 0m2.309s 00:04:55.337 17:21:28 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.337 17:21:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.337 ************************************ 00:04:55.337 END TEST skip_rpc 00:04:55.337 ************************************ 00:04:55.337 17:21:28 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:55.337 17:21:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:55.337 17:21:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.337 17:21:28 -- common/autotest_common.sh@10 -- # set +x 00:04:55.337 ************************************ 00:04:55.337 START TEST rpc_client 00:04:55.337 ************************************ 00:04:55.337 17:21:28 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:55.598 * Looking for test storage... 00:04:55.598 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:55.598 17:21:28 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:55.598 17:21:28 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:04:55.598 17:21:28 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:55.598 17:21:28 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:55.598 17:21:28 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:55.598 17:21:28 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:55.598 17:21:28 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:55.598 17:21:28 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:55.598 17:21:28 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:55.598 17:21:28 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:55.598 17:21:28 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:55.598 17:21:28 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:55.598 17:21:28 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:55.598 17:21:28 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:55.598 17:21:28 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:55.598 17:21:28 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:55.598 17:21:28 rpc_client -- scripts/common.sh@345 
-- # : 1 00:04:55.598 17:21:28 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:55.598 17:21:28 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:55.598 17:21:28 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:55.598 17:21:28 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:55.598 17:21:28 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:55.598 17:21:28 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:55.598 17:21:28 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:55.598 17:21:28 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:55.598 17:21:28 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:55.598 17:21:28 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:55.598 17:21:28 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:55.598 17:21:28 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:55.598 17:21:28 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:55.598 17:21:28 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:55.598 17:21:28 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:55.598 17:21:28 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:55.598 17:21:28 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:55.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.598 --rc genhtml_branch_coverage=1 00:04:55.598 --rc genhtml_function_coverage=1 00:04:55.598 --rc genhtml_legend=1 00:04:55.598 --rc geninfo_all_blocks=1 00:04:55.598 --rc geninfo_unexecuted_blocks=1 00:04:55.598 00:04:55.598 ' 00:04:55.598 17:21:28 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:55.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.598 --rc genhtml_branch_coverage=1 00:04:55.598 --rc genhtml_function_coverage=1 00:04:55.598 --rc 
genhtml_legend=1 00:04:55.598 --rc geninfo_all_blocks=1 00:04:55.598 --rc geninfo_unexecuted_blocks=1 00:04:55.598 00:04:55.598 ' 00:04:55.598 17:21:28 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:55.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.598 --rc genhtml_branch_coverage=1 00:04:55.598 --rc genhtml_function_coverage=1 00:04:55.598 --rc genhtml_legend=1 00:04:55.598 --rc geninfo_all_blocks=1 00:04:55.598 --rc geninfo_unexecuted_blocks=1 00:04:55.598 00:04:55.598 ' 00:04:55.598 17:21:28 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:55.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.598 --rc genhtml_branch_coverage=1 00:04:55.598 --rc genhtml_function_coverage=1 00:04:55.598 --rc genhtml_legend=1 00:04:55.598 --rc geninfo_all_blocks=1 00:04:55.598 --rc geninfo_unexecuted_blocks=1 00:04:55.598 00:04:55.598 ' 00:04:55.599 17:21:28 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:55.599 OK 00:04:55.599 17:21:28 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:55.599 00:04:55.599 real 0m0.314s 00:04:55.599 user 0m0.175s 00:04:55.599 sys 0m0.156s 00:04:55.599 17:21:28 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.599 17:21:28 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:55.599 ************************************ 00:04:55.599 END TEST rpc_client 00:04:55.599 ************************************ 00:04:55.859 17:21:28 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:55.859 17:21:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:55.859 17:21:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.859 17:21:28 -- common/autotest_common.sh@10 -- # set +x 00:04:55.859 ************************************ 00:04:55.859 START TEST json_config 
00:04:55.859 ************************************ 00:04:55.859 17:21:29 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:55.859 17:21:29 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:55.859 17:21:29 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:04:55.859 17:21:29 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:55.859 17:21:29 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:55.859 17:21:29 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:55.859 17:21:29 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:55.859 17:21:29 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:55.859 17:21:29 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:55.859 17:21:29 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:55.859 17:21:29 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:55.859 17:21:29 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:55.859 17:21:29 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:55.859 17:21:29 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:55.859 17:21:29 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:55.859 17:21:29 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:55.859 17:21:29 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:55.859 17:21:29 json_config -- scripts/common.sh@345 -- # : 1 00:04:55.859 17:21:29 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:55.859 17:21:29 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:55.859 17:21:29 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:55.859 17:21:29 json_config -- scripts/common.sh@353 -- # local d=1 00:04:55.859 17:21:29 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:55.859 17:21:29 json_config -- scripts/common.sh@355 -- # echo 1 00:04:55.859 17:21:29 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:55.859 17:21:29 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:55.859 17:21:29 json_config -- scripts/common.sh@353 -- # local d=2 00:04:55.859 17:21:29 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:55.859 17:21:29 json_config -- scripts/common.sh@355 -- # echo 2 00:04:55.859 17:21:29 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:55.859 17:21:29 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:55.859 17:21:29 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:55.859 17:21:29 json_config -- scripts/common.sh@368 -- # return 0 00:04:55.859 17:21:29 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:55.859 17:21:29 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:55.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.859 --rc genhtml_branch_coverage=1 00:04:55.859 --rc genhtml_function_coverage=1 00:04:55.859 --rc genhtml_legend=1 00:04:55.859 --rc geninfo_all_blocks=1 00:04:55.859 --rc geninfo_unexecuted_blocks=1 00:04:55.859 00:04:55.859 ' 00:04:55.859 17:21:29 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:55.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.859 --rc genhtml_branch_coverage=1 00:04:55.859 --rc genhtml_function_coverage=1 00:04:55.859 --rc genhtml_legend=1 00:04:55.859 --rc geninfo_all_blocks=1 00:04:55.859 --rc geninfo_unexecuted_blocks=1 00:04:55.859 00:04:55.859 ' 00:04:55.859 17:21:29 json_config -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:55.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.859 --rc genhtml_branch_coverage=1 00:04:55.859 --rc genhtml_function_coverage=1 00:04:55.859 --rc genhtml_legend=1 00:04:55.859 --rc geninfo_all_blocks=1 00:04:55.860 --rc geninfo_unexecuted_blocks=1 00:04:55.860 00:04:55.860 ' 00:04:55.860 17:21:29 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:55.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.860 --rc genhtml_branch_coverage=1 00:04:55.860 --rc genhtml_function_coverage=1 00:04:55.860 --rc genhtml_legend=1 00:04:55.860 --rc geninfo_all_blocks=1 00:04:55.860 --rc geninfo_unexecuted_blocks=1 00:04:55.860 00:04:55.860 ' 00:04:55.860 17:21:29 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:55.860 17:21:29 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:55.860 17:21:29 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:55.860 17:21:29 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:55.860 17:21:29 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:55.860 17:21:29 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:55.860 17:21:29 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:55.860 17:21:29 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:55.860 17:21:29 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:55.860 17:21:29 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:55.860 17:21:29 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:55.860 17:21:29 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:55.860 17:21:29 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3ce91d1d-4798-4354-a4cf-cf5578cee81c 00:04:55.860 17:21:29 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=3ce91d1d-4798-4354-a4cf-cf5578cee81c 00:04:55.860 17:21:29 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:55.860 17:21:29 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:55.860 17:21:29 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:55.860 17:21:29 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:55.860 17:21:29 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:55.860 17:21:29 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:55.860 17:21:29 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:55.860 17:21:29 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:55.860 17:21:29 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:55.860 17:21:29 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.860 17:21:29 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.860 17:21:29 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.860 17:21:29 json_config -- paths/export.sh@5 -- # export PATH 00:04:55.860 17:21:29 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.860 17:21:29 json_config -- nvmf/common.sh@51 -- # : 0 00:04:55.860 17:21:29 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:55.860 17:21:29 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:55.860 17:21:29 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:55.860 17:21:29 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:55.860 17:21:29 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:55.860 17:21:29 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:55.860 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:56.119 17:21:29 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:56.119 17:21:29 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:56.119 17:21:29 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:56.119 17:21:29 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:04:56.119 17:21:29 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:56.119 17:21:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:56.119 17:21:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:56.119 17:21:29 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:56.119 17:21:29 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:56.119 WARNING: No tests are enabled so not running JSON configuration tests 00:04:56.119 17:21:29 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:56.119 00:04:56.119 real 0m0.239s 00:04:56.119 user 0m0.138s 00:04:56.119 sys 0m0.105s 00:04:56.119 17:21:29 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.119 17:21:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:56.119 ************************************ 00:04:56.119 END TEST json_config 00:04:56.119 ************************************ 00:04:56.119 17:21:29 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:56.119 17:21:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:56.119 17:21:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.119 17:21:29 -- common/autotest_common.sh@10 -- # set +x 00:04:56.119 ************************************ 00:04:56.119 START TEST json_config_extra_key 00:04:56.119 ************************************ 00:04:56.119 17:21:29 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:56.119 17:21:29 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:56.119 17:21:29 json_config_extra_key -- 
common/autotest_common.sh@1711 -- # lcov --version 00:04:56.119 17:21:29 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:56.119 17:21:29 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:56.119 17:21:29 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:56.119 17:21:29 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:56.119 17:21:29 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:56.119 17:21:29 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:56.119 17:21:29 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:56.119 17:21:29 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:56.119 17:21:29 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:56.119 17:21:29 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:56.119 17:21:29 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:56.119 17:21:29 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:56.119 17:21:29 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:56.119 17:21:29 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:56.119 17:21:29 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:56.119 17:21:29 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:56.119 17:21:29 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:56.119 17:21:29 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:56.119 17:21:29 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:56.119 17:21:29 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:56.119 17:21:29 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:56.381 17:21:29 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:56.381 17:21:29 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:56.381 17:21:29 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:56.381 17:21:29 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:56.381 17:21:29 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:56.381 17:21:29 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:56.381 17:21:29 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:56.381 17:21:29 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:56.381 17:21:29 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:56.381 17:21:29 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:56.381 17:21:29 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:56.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.381 --rc genhtml_branch_coverage=1 00:04:56.381 --rc genhtml_function_coverage=1 00:04:56.381 --rc genhtml_legend=1 00:04:56.381 --rc geninfo_all_blocks=1 00:04:56.381 --rc geninfo_unexecuted_blocks=1 00:04:56.381 00:04:56.381 ' 00:04:56.381 17:21:29 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:56.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.381 --rc genhtml_branch_coverage=1 00:04:56.381 --rc genhtml_function_coverage=1 00:04:56.381 --rc 
genhtml_legend=1 00:04:56.381 --rc geninfo_all_blocks=1 00:04:56.381 --rc geninfo_unexecuted_blocks=1 00:04:56.381 00:04:56.381 ' 00:04:56.381 17:21:29 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:56.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.381 --rc genhtml_branch_coverage=1 00:04:56.381 --rc genhtml_function_coverage=1 00:04:56.381 --rc genhtml_legend=1 00:04:56.381 --rc geninfo_all_blocks=1 00:04:56.381 --rc geninfo_unexecuted_blocks=1 00:04:56.381 00:04:56.381 ' 00:04:56.381 17:21:29 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:56.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.381 --rc genhtml_branch_coverage=1 00:04:56.381 --rc genhtml_function_coverage=1 00:04:56.381 --rc genhtml_legend=1 00:04:56.381 --rc geninfo_all_blocks=1 00:04:56.381 --rc geninfo_unexecuted_blocks=1 00:04:56.381 00:04:56.381 ' 00:04:56.381 17:21:29 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:56.381 17:21:29 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:56.381 17:21:29 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:56.381 17:21:29 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:56.381 17:21:29 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:56.381 17:21:29 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:56.381 17:21:29 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:56.381 17:21:29 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:56.381 17:21:29 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:56.381 17:21:29 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:56.381 17:21:29 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:56.381 17:21:29 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:56.381 17:21:29 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3ce91d1d-4798-4354-a4cf-cf5578cee81c 00:04:56.381 17:21:29 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=3ce91d1d-4798-4354-a4cf-cf5578cee81c 00:04:56.381 17:21:29 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:56.381 17:21:29 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:56.381 17:21:29 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:56.381 17:21:29 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:56.381 17:21:29 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:56.381 17:21:29 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:56.381 17:21:29 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:56.381 17:21:29 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:56.381 17:21:29 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:56.381 17:21:29 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.381 17:21:29 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.381 17:21:29 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.381 17:21:29 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:56.381 17:21:29 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.381 17:21:29 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:56.381 17:21:29 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:56.381 17:21:29 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:56.381 17:21:29 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:56.381 17:21:29 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:56.381 17:21:29 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:04:56.381 17:21:29 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:56.381 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:56.381 17:21:29 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:56.381 17:21:29 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:56.381 17:21:29 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:56.381 17:21:29 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:56.381 17:21:29 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:56.381 17:21:29 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:56.381 17:21:29 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:56.381 17:21:29 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:56.381 17:21:29 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:56.381 17:21:29 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:56.381 17:21:29 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:56.381 17:21:29 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:56.381 17:21:29 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:56.381 17:21:29 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:56.381 INFO: launching applications... 
00:04:56.381 17:21:29 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:56.381 17:21:29 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:56.381 17:21:29 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:56.381 17:21:29 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:56.381 17:21:29 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:56.381 17:21:29 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:56.381 17:21:29 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:56.381 17:21:29 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:56.381 17:21:29 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57687 00:04:56.381 17:21:29 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:56.381 Waiting for target to run... 00:04:56.381 17:21:29 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:56.381 17:21:29 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57687 /var/tmp/spdk_tgt.sock 00:04:56.382 17:21:29 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57687 ']' 00:04:56.382 17:21:29 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:56.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:56.382 17:21:29 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:56.382 17:21:29 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:56.382 17:21:29 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:56.382 17:21:29 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:56.382 [2024-12-07 17:21:29.654191] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:04:56.382 [2024-12-07 17:21:29.654424] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57687 ] 00:04:56.967 [2024-12-07 17:21:30.039969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.967 [2024-12-07 17:21:30.147517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.537 17:21:30 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:57.537 17:21:30 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:57.537 00:04:57.537 INFO: shutting down applications... 00:04:57.537 17:21:30 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:57.537 17:21:30 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:04:57.537 17:21:30 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:57.537 17:21:30 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:57.537 17:21:30 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:57.797 17:21:30 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57687 ]] 00:04:57.797 17:21:30 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57687 00:04:57.797 17:21:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:57.797 17:21:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:57.797 17:21:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57687 00:04:57.797 17:21:30 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:58.057 17:21:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:58.057 17:21:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:58.057 17:21:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57687 00:04:58.057 17:21:31 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:58.627 17:21:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:58.627 17:21:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:58.627 17:21:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57687 00:04:58.627 17:21:31 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:59.198 17:21:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:59.198 17:21:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:59.198 17:21:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57687 00:04:59.198 17:21:32 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:59.768 17:21:32 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:04:59.768 17:21:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:59.768 17:21:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57687 00:04:59.768 17:21:32 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:00.337 17:21:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:00.337 17:21:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:00.337 17:21:33 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57687 00:05:00.337 17:21:33 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:00.597 17:21:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:00.597 17:21:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:00.597 17:21:33 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57687 00:05:00.597 SPDK target shutdown done 00:05:00.597 Success 00:05:00.597 17:21:33 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:00.597 17:21:33 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:00.597 17:21:33 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:00.597 17:21:33 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:00.597 17:21:33 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:00.597 00:05:00.597 real 0m4.646s 00:05:00.597 user 0m4.004s 00:05:00.597 sys 0m0.620s 00:05:00.597 17:21:33 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.597 17:21:33 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:00.597 ************************************ 00:05:00.597 END TEST json_config_extra_key 00:05:00.597 ************************************ 00:05:00.857 17:21:34 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:00.857 17:21:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.857 17:21:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.857 17:21:34 -- common/autotest_common.sh@10 -- # set +x 00:05:00.857 ************************************ 00:05:00.857 START TEST alias_rpc 00:05:00.857 ************************************ 00:05:00.857 17:21:34 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:00.857 * Looking for test storage... 00:05:00.857 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:00.857 17:21:34 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:00.857 17:21:34 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:00.857 17:21:34 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:00.857 17:21:34 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:00.857 17:21:34 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:00.857 17:21:34 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:00.857 17:21:34 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:00.857 17:21:34 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:00.857 17:21:34 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:00.857 17:21:34 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:00.857 17:21:34 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:00.857 17:21:34 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:00.857 17:21:34 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:00.857 17:21:34 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:00.857 17:21:34 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:00.857 17:21:34 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:00.857 17:21:34 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:05:00.857 17:21:34 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:00.857 17:21:34 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:00.857 17:21:34 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:00.857 17:21:34 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:00.857 17:21:34 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:00.857 17:21:34 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:00.857 17:21:34 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:00.857 17:21:34 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:00.857 17:21:34 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:00.857 17:21:34 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:00.857 17:21:34 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:00.857 17:21:34 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:00.857 17:21:34 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:00.857 17:21:34 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:00.857 17:21:34 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:00.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:00.857 17:21:34 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:00.857 17:21:34 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:00.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.857 --rc genhtml_branch_coverage=1 00:05:00.857 --rc genhtml_function_coverage=1 00:05:00.857 --rc genhtml_legend=1 00:05:00.857 --rc geninfo_all_blocks=1 00:05:00.857 --rc geninfo_unexecuted_blocks=1 00:05:00.857 00:05:00.857 ' 00:05:00.857 17:21:34 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:00.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.857 --rc genhtml_branch_coverage=1 00:05:00.857 --rc genhtml_function_coverage=1 00:05:00.857 --rc genhtml_legend=1 00:05:00.858 --rc geninfo_all_blocks=1 00:05:00.858 --rc geninfo_unexecuted_blocks=1 00:05:00.858 00:05:00.858 ' 00:05:00.858 17:21:34 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:00.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.858 --rc genhtml_branch_coverage=1 00:05:00.858 --rc genhtml_function_coverage=1 00:05:00.858 --rc genhtml_legend=1 00:05:00.858 --rc geninfo_all_blocks=1 00:05:00.858 --rc geninfo_unexecuted_blocks=1 00:05:00.858 00:05:00.858 ' 00:05:00.858 17:21:34 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:00.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.858 --rc genhtml_branch_coverage=1 00:05:00.858 --rc genhtml_function_coverage=1 00:05:00.858 --rc genhtml_legend=1 00:05:00.858 --rc geninfo_all_blocks=1 00:05:00.858 --rc geninfo_unexecuted_blocks=1 00:05:00.858 00:05:00.858 ' 00:05:00.858 17:21:34 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:00.858 17:21:34 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57800 00:05:00.858 17:21:34 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 
57800 00:05:00.858 17:21:34 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57800 ']' 00:05:00.858 17:21:34 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.858 17:21:34 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:00.858 17:21:34 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.858 17:21:34 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:00.858 17:21:34 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:00.858 17:21:34 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.117 [2024-12-07 17:21:34.283389] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:05:01.117 [2024-12-07 17:21:34.283569] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57800 ] 00:05:01.117 [2024-12-07 17:21:34.454742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.377 [2024-12-07 17:21:34.568447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.315 17:21:35 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:02.315 17:21:35 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:02.315 17:21:35 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:02.315 17:21:35 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57800 00:05:02.315 17:21:35 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57800 ']' 00:05:02.315 17:21:35 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57800 00:05:02.315 17:21:35 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:02.315 
17:21:35 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:02.315 17:21:35 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57800 00:05:02.315 17:21:35 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:02.315 17:21:35 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:02.315 17:21:35 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57800' 00:05:02.315 killing process with pid 57800 00:05:02.315 17:21:35 alias_rpc -- common/autotest_common.sh@973 -- # kill 57800 00:05:02.315 17:21:35 alias_rpc -- common/autotest_common.sh@978 -- # wait 57800 00:05:04.855 00:05:04.855 real 0m3.967s 00:05:04.855 user 0m4.065s 00:05:04.855 sys 0m0.539s 00:05:04.855 17:21:37 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:04.855 17:21:37 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.855 ************************************ 00:05:04.855 END TEST alias_rpc 00:05:04.855 ************************************ 00:05:04.855 17:21:38 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:04.855 17:21:38 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:04.855 17:21:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:04.855 17:21:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.855 17:21:38 -- common/autotest_common.sh@10 -- # set +x 00:05:04.855 ************************************ 00:05:04.855 START TEST spdkcli_tcp 00:05:04.855 ************************************ 00:05:04.855 17:21:38 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:04.855 * Looking for test storage... 
00:05:04.855 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:04.855 17:21:38 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:04.855 17:21:38 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:04.855 17:21:38 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:05.116 17:21:38 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:05.116 17:21:38 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:05.117 17:21:38 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:05.117 17:21:38 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:05.117 17:21:38 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.117 17:21:38 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:05.117 17:21:38 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:05.117 17:21:38 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:05.117 17:21:38 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:05.117 17:21:38 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:05.117 17:21:38 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:05.117 17:21:38 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:05.117 17:21:38 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:05.117 17:21:38 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:05.117 17:21:38 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:05.117 17:21:38 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:05.117 17:21:38 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:05.117 17:21:38 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:05.117 17:21:38 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.117 17:21:38 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:05.117 17:21:38 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:05.117 17:21:38 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:05.117 17:21:38 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:05.117 17:21:38 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.117 17:21:38 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:05.117 17:21:38 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:05.117 17:21:38 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:05.117 17:21:38 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:05.117 17:21:38 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:05.117 17:21:38 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.117 17:21:38 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:05.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.117 --rc genhtml_branch_coverage=1 00:05:05.117 --rc genhtml_function_coverage=1 00:05:05.117 --rc genhtml_legend=1 00:05:05.117 --rc geninfo_all_blocks=1 00:05:05.117 --rc geninfo_unexecuted_blocks=1 00:05:05.117 00:05:05.117 ' 00:05:05.117 17:21:38 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:05.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.117 --rc genhtml_branch_coverage=1 00:05:05.117 --rc genhtml_function_coverage=1 00:05:05.117 --rc genhtml_legend=1 00:05:05.117 --rc geninfo_all_blocks=1 00:05:05.117 --rc geninfo_unexecuted_blocks=1 00:05:05.117 00:05:05.117 ' 00:05:05.117 17:21:38 spdkcli_tcp -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:05.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.117 --rc genhtml_branch_coverage=1 00:05:05.117 --rc genhtml_function_coverage=1 00:05:05.117 --rc genhtml_legend=1 00:05:05.117 --rc geninfo_all_blocks=1 00:05:05.117 --rc geninfo_unexecuted_blocks=1 00:05:05.117 00:05:05.117 ' 00:05:05.117 17:21:38 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:05.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.117 --rc genhtml_branch_coverage=1 00:05:05.117 --rc genhtml_function_coverage=1 00:05:05.117 --rc genhtml_legend=1 00:05:05.117 --rc geninfo_all_blocks=1 00:05:05.117 --rc geninfo_unexecuted_blocks=1 00:05:05.117 00:05:05.117 ' 00:05:05.117 17:21:38 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:05.117 17:21:38 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:05.117 17:21:38 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:05.117 17:21:38 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:05.117 17:21:38 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:05.117 17:21:38 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:05.117 17:21:38 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:05.117 17:21:38 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:05.117 17:21:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:05.117 17:21:38 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57906 00:05:05.117 17:21:38 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57906 00:05:05.117 17:21:38 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:05.117 17:21:38 spdkcli_tcp -- 
common/autotest_common.sh@835 -- # '[' -z 57906 ']' 00:05:05.117 17:21:38 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.117 17:21:38 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:05.117 17:21:38 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.117 17:21:38 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:05.117 17:21:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:05.117 [2024-12-07 17:21:38.387328] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:05:05.117 [2024-12-07 17:21:38.387607] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57906 ] 00:05:05.385 [2024-12-07 17:21:38.571121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:05.385 [2024-12-07 17:21:38.686794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.385 [2024-12-07 17:21:38.686846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.338 17:21:39 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:06.338 17:21:39 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:06.338 17:21:39 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57928 00:05:06.338 17:21:39 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:06.338 17:21:39 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:06.600 [ 00:05:06.600 "bdev_malloc_delete", 
00:05:06.600 "bdev_malloc_create", 00:05:06.600 "bdev_null_resize", 00:05:06.600 "bdev_null_delete", 00:05:06.600 "bdev_null_create", 00:05:06.600 "bdev_nvme_cuse_unregister", 00:05:06.600 "bdev_nvme_cuse_register", 00:05:06.600 "bdev_opal_new_user", 00:05:06.600 "bdev_opal_set_lock_state", 00:05:06.600 "bdev_opal_delete", 00:05:06.600 "bdev_opal_get_info", 00:05:06.600 "bdev_opal_create", 00:05:06.600 "bdev_nvme_opal_revert", 00:05:06.600 "bdev_nvme_opal_init", 00:05:06.600 "bdev_nvme_send_cmd", 00:05:06.600 "bdev_nvme_set_keys", 00:05:06.600 "bdev_nvme_get_path_iostat", 00:05:06.600 "bdev_nvme_get_mdns_discovery_info", 00:05:06.600 "bdev_nvme_stop_mdns_discovery", 00:05:06.600 "bdev_nvme_start_mdns_discovery", 00:05:06.600 "bdev_nvme_set_multipath_policy", 00:05:06.600 "bdev_nvme_set_preferred_path", 00:05:06.600 "bdev_nvme_get_io_paths", 00:05:06.600 "bdev_nvme_remove_error_injection", 00:05:06.600 "bdev_nvme_add_error_injection", 00:05:06.600 "bdev_nvme_get_discovery_info", 00:05:06.600 "bdev_nvme_stop_discovery", 00:05:06.600 "bdev_nvme_start_discovery", 00:05:06.600 "bdev_nvme_get_controller_health_info", 00:05:06.600 "bdev_nvme_disable_controller", 00:05:06.600 "bdev_nvme_enable_controller", 00:05:06.600 "bdev_nvme_reset_controller", 00:05:06.600 "bdev_nvme_get_transport_statistics", 00:05:06.600 "bdev_nvme_apply_firmware", 00:05:06.600 "bdev_nvme_detach_controller", 00:05:06.600 "bdev_nvme_get_controllers", 00:05:06.600 "bdev_nvme_attach_controller", 00:05:06.600 "bdev_nvme_set_hotplug", 00:05:06.600 "bdev_nvme_set_options", 00:05:06.600 "bdev_passthru_delete", 00:05:06.600 "bdev_passthru_create", 00:05:06.600 "bdev_lvol_set_parent_bdev", 00:05:06.600 "bdev_lvol_set_parent", 00:05:06.600 "bdev_lvol_check_shallow_copy", 00:05:06.600 "bdev_lvol_start_shallow_copy", 00:05:06.600 "bdev_lvol_grow_lvstore", 00:05:06.600 "bdev_lvol_get_lvols", 00:05:06.600 "bdev_lvol_get_lvstores", 00:05:06.600 "bdev_lvol_delete", 00:05:06.600 "bdev_lvol_set_read_only", 
00:05:06.600 "bdev_lvol_resize", 00:05:06.600 "bdev_lvol_decouple_parent", 00:05:06.600 "bdev_lvol_inflate", 00:05:06.600 "bdev_lvol_rename", 00:05:06.600 "bdev_lvol_clone_bdev", 00:05:06.600 "bdev_lvol_clone", 00:05:06.600 "bdev_lvol_snapshot", 00:05:06.600 "bdev_lvol_create", 00:05:06.600 "bdev_lvol_delete_lvstore", 00:05:06.600 "bdev_lvol_rename_lvstore", 00:05:06.600 "bdev_lvol_create_lvstore", 00:05:06.600 "bdev_raid_set_options", 00:05:06.600 "bdev_raid_remove_base_bdev", 00:05:06.600 "bdev_raid_add_base_bdev", 00:05:06.600 "bdev_raid_delete", 00:05:06.600 "bdev_raid_create", 00:05:06.600 "bdev_raid_get_bdevs", 00:05:06.600 "bdev_error_inject_error", 00:05:06.600 "bdev_error_delete", 00:05:06.600 "bdev_error_create", 00:05:06.600 "bdev_split_delete", 00:05:06.600 "bdev_split_create", 00:05:06.600 "bdev_delay_delete", 00:05:06.600 "bdev_delay_create", 00:05:06.600 "bdev_delay_update_latency", 00:05:06.600 "bdev_zone_block_delete", 00:05:06.600 "bdev_zone_block_create", 00:05:06.600 "blobfs_create", 00:05:06.600 "blobfs_detect", 00:05:06.600 "blobfs_set_cache_size", 00:05:06.600 "bdev_aio_delete", 00:05:06.600 "bdev_aio_rescan", 00:05:06.600 "bdev_aio_create", 00:05:06.600 "bdev_ftl_set_property", 00:05:06.600 "bdev_ftl_get_properties", 00:05:06.600 "bdev_ftl_get_stats", 00:05:06.600 "bdev_ftl_unmap", 00:05:06.600 "bdev_ftl_unload", 00:05:06.600 "bdev_ftl_delete", 00:05:06.600 "bdev_ftl_load", 00:05:06.600 "bdev_ftl_create", 00:05:06.600 "bdev_virtio_attach_controller", 00:05:06.600 "bdev_virtio_scsi_get_devices", 00:05:06.600 "bdev_virtio_detach_controller", 00:05:06.600 "bdev_virtio_blk_set_hotplug", 00:05:06.600 "bdev_iscsi_delete", 00:05:06.600 "bdev_iscsi_create", 00:05:06.600 "bdev_iscsi_set_options", 00:05:06.600 "accel_error_inject_error", 00:05:06.600 "ioat_scan_accel_module", 00:05:06.600 "dsa_scan_accel_module", 00:05:06.600 "iaa_scan_accel_module", 00:05:06.600 "keyring_file_remove_key", 00:05:06.600 "keyring_file_add_key", 00:05:06.600 
"keyring_linux_set_options", 00:05:06.600 "fsdev_aio_delete", 00:05:06.600 "fsdev_aio_create", 00:05:06.600 "iscsi_get_histogram", 00:05:06.600 "iscsi_enable_histogram", 00:05:06.600 "iscsi_set_options", 00:05:06.600 "iscsi_get_auth_groups", 00:05:06.600 "iscsi_auth_group_remove_secret", 00:05:06.600 "iscsi_auth_group_add_secret", 00:05:06.600 "iscsi_delete_auth_group", 00:05:06.600 "iscsi_create_auth_group", 00:05:06.600 "iscsi_set_discovery_auth", 00:05:06.600 "iscsi_get_options", 00:05:06.600 "iscsi_target_node_request_logout", 00:05:06.600 "iscsi_target_node_set_redirect", 00:05:06.600 "iscsi_target_node_set_auth", 00:05:06.600 "iscsi_target_node_add_lun", 00:05:06.600 "iscsi_get_stats", 00:05:06.600 "iscsi_get_connections", 00:05:06.600 "iscsi_portal_group_set_auth", 00:05:06.600 "iscsi_start_portal_group", 00:05:06.600 "iscsi_delete_portal_group", 00:05:06.600 "iscsi_create_portal_group", 00:05:06.600 "iscsi_get_portal_groups", 00:05:06.600 "iscsi_delete_target_node", 00:05:06.600 "iscsi_target_node_remove_pg_ig_maps", 00:05:06.600 "iscsi_target_node_add_pg_ig_maps", 00:05:06.600 "iscsi_create_target_node", 00:05:06.600 "iscsi_get_target_nodes", 00:05:06.600 "iscsi_delete_initiator_group", 00:05:06.600 "iscsi_initiator_group_remove_initiators", 00:05:06.600 "iscsi_initiator_group_add_initiators", 00:05:06.600 "iscsi_create_initiator_group", 00:05:06.600 "iscsi_get_initiator_groups", 00:05:06.600 "nvmf_set_crdt", 00:05:06.600 "nvmf_set_config", 00:05:06.601 "nvmf_set_max_subsystems", 00:05:06.601 "nvmf_stop_mdns_prr", 00:05:06.601 "nvmf_publish_mdns_prr", 00:05:06.601 "nvmf_subsystem_get_listeners", 00:05:06.601 "nvmf_subsystem_get_qpairs", 00:05:06.601 "nvmf_subsystem_get_controllers", 00:05:06.601 "nvmf_get_stats", 00:05:06.601 "nvmf_get_transports", 00:05:06.601 "nvmf_create_transport", 00:05:06.601 "nvmf_get_targets", 00:05:06.601 "nvmf_delete_target", 00:05:06.601 "nvmf_create_target", 00:05:06.601 "nvmf_subsystem_allow_any_host", 00:05:06.601 
"nvmf_subsystem_set_keys", 00:05:06.601 "nvmf_subsystem_remove_host", 00:05:06.601 "nvmf_subsystem_add_host", 00:05:06.601 "nvmf_ns_remove_host", 00:05:06.601 "nvmf_ns_add_host", 00:05:06.601 "nvmf_subsystem_remove_ns", 00:05:06.601 "nvmf_subsystem_set_ns_ana_group", 00:05:06.601 "nvmf_subsystem_add_ns", 00:05:06.601 "nvmf_subsystem_listener_set_ana_state", 00:05:06.601 "nvmf_discovery_get_referrals", 00:05:06.601 "nvmf_discovery_remove_referral", 00:05:06.601 "nvmf_discovery_add_referral", 00:05:06.601 "nvmf_subsystem_remove_listener", 00:05:06.601 "nvmf_subsystem_add_listener", 00:05:06.601 "nvmf_delete_subsystem", 00:05:06.601 "nvmf_create_subsystem", 00:05:06.601 "nvmf_get_subsystems", 00:05:06.601 "env_dpdk_get_mem_stats", 00:05:06.601 "nbd_get_disks", 00:05:06.601 "nbd_stop_disk", 00:05:06.601 "nbd_start_disk", 00:05:06.601 "ublk_recover_disk", 00:05:06.601 "ublk_get_disks", 00:05:06.601 "ublk_stop_disk", 00:05:06.601 "ublk_start_disk", 00:05:06.601 "ublk_destroy_target", 00:05:06.601 "ublk_create_target", 00:05:06.601 "virtio_blk_create_transport", 00:05:06.601 "virtio_blk_get_transports", 00:05:06.601 "vhost_controller_set_coalescing", 00:05:06.601 "vhost_get_controllers", 00:05:06.601 "vhost_delete_controller", 00:05:06.601 "vhost_create_blk_controller", 00:05:06.601 "vhost_scsi_controller_remove_target", 00:05:06.601 "vhost_scsi_controller_add_target", 00:05:06.601 "vhost_start_scsi_controller", 00:05:06.601 "vhost_create_scsi_controller", 00:05:06.601 "thread_set_cpumask", 00:05:06.601 "scheduler_set_options", 00:05:06.601 "framework_get_governor", 00:05:06.601 "framework_get_scheduler", 00:05:06.601 "framework_set_scheduler", 00:05:06.601 "framework_get_reactors", 00:05:06.601 "thread_get_io_channels", 00:05:06.601 "thread_get_pollers", 00:05:06.601 "thread_get_stats", 00:05:06.601 "framework_monitor_context_switch", 00:05:06.601 "spdk_kill_instance", 00:05:06.601 "log_enable_timestamps", 00:05:06.601 "log_get_flags", 00:05:06.601 "log_clear_flag", 
00:05:06.601 "log_set_flag", 00:05:06.601 "log_get_level", 00:05:06.601 "log_set_level", 00:05:06.601 "log_get_print_level", 00:05:06.601 "log_set_print_level", 00:05:06.601 "framework_enable_cpumask_locks", 00:05:06.601 "framework_disable_cpumask_locks", 00:05:06.601 "framework_wait_init", 00:05:06.601 "framework_start_init", 00:05:06.601 "scsi_get_devices", 00:05:06.601 "bdev_get_histogram", 00:05:06.601 "bdev_enable_histogram", 00:05:06.601 "bdev_set_qos_limit", 00:05:06.601 "bdev_set_qd_sampling_period", 00:05:06.601 "bdev_get_bdevs", 00:05:06.601 "bdev_reset_iostat", 00:05:06.601 "bdev_get_iostat", 00:05:06.601 "bdev_examine", 00:05:06.601 "bdev_wait_for_examine", 00:05:06.601 "bdev_set_options", 00:05:06.601 "accel_get_stats", 00:05:06.601 "accel_set_options", 00:05:06.601 "accel_set_driver", 00:05:06.601 "accel_crypto_key_destroy", 00:05:06.601 "accel_crypto_keys_get", 00:05:06.601 "accel_crypto_key_create", 00:05:06.601 "accel_assign_opc", 00:05:06.601 "accel_get_module_info", 00:05:06.601 "accel_get_opc_assignments", 00:05:06.601 "vmd_rescan", 00:05:06.601 "vmd_remove_device", 00:05:06.601 "vmd_enable", 00:05:06.601 "sock_get_default_impl", 00:05:06.601 "sock_set_default_impl", 00:05:06.601 "sock_impl_set_options", 00:05:06.601 "sock_impl_get_options", 00:05:06.601 "iobuf_get_stats", 00:05:06.601 "iobuf_set_options", 00:05:06.601 "keyring_get_keys", 00:05:06.601 "framework_get_pci_devices", 00:05:06.601 "framework_get_config", 00:05:06.601 "framework_get_subsystems", 00:05:06.601 "fsdev_set_opts", 00:05:06.601 "fsdev_get_opts", 00:05:06.601 "trace_get_info", 00:05:06.601 "trace_get_tpoint_group_mask", 00:05:06.601 "trace_disable_tpoint_group", 00:05:06.601 "trace_enable_tpoint_group", 00:05:06.601 "trace_clear_tpoint_mask", 00:05:06.601 "trace_set_tpoint_mask", 00:05:06.601 "notify_get_notifications", 00:05:06.601 "notify_get_types", 00:05:06.601 "spdk_get_version", 00:05:06.601 "rpc_get_methods" 00:05:06.601 ] 00:05:06.601 17:21:39 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:06.601 17:21:39 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:06.601 17:21:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:06.601 17:21:39 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:06.601 17:21:39 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57906 00:05:06.601 17:21:39 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57906 ']' 00:05:06.601 17:21:39 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57906 00:05:06.601 17:21:39 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:06.601 17:21:39 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:06.601 17:21:39 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57906 00:05:06.601 17:21:39 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:06.601 17:21:39 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:06.601 killing process with pid 57906 00:05:06.601 17:21:39 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57906' 00:05:06.601 17:21:39 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57906 00:05:06.601 17:21:39 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57906 00:05:09.139 ************************************ 00:05:09.139 END TEST spdkcli_tcp 00:05:09.139 ************************************ 00:05:09.139 00:05:09.139 real 0m4.201s 00:05:09.139 user 0m7.561s 00:05:09.139 sys 0m0.655s 00:05:09.139 17:21:42 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.139 17:21:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:09.139 17:21:42 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:09.139 17:21:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.139 17:21:42 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.139 17:21:42 -- common/autotest_common.sh@10 -- # set +x 00:05:09.139 ************************************ 00:05:09.139 START TEST dpdk_mem_utility 00:05:09.139 ************************************ 00:05:09.139 17:21:42 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:09.139 * Looking for test storage... 00:05:09.139 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:09.139 17:21:42 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:09.139 17:21:42 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:09.139 17:21:42 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:05:09.139 17:21:42 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:09.139 17:21:42 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:09.139 17:21:42 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:09.139 17:21:42 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:09.139 17:21:42 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:09.139 17:21:42 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:09.139 17:21:42 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:09.139 17:21:42 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:09.139 17:21:42 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:09.139 17:21:42 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:09.139 17:21:42 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:09.139 17:21:42 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:09.139 17:21:42 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:09.139 17:21:42 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:09.139 
17:21:42 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:09.139 17:21:42 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:09.139 17:21:42 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:09.139 17:21:42 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:09.139 17:21:42 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:09.139 17:21:42 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:09.139 17:21:42 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:09.139 17:21:42 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:09.139 17:21:42 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:09.139 17:21:42 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:09.139 17:21:42 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:09.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:09.139 17:21:42 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:09.139 17:21:42 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:09.139 17:21:42 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:09.139 17:21:42 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:09.139 17:21:42 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:09.139 17:21:42 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:09.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.139 --rc genhtml_branch_coverage=1 00:05:09.139 --rc genhtml_function_coverage=1 00:05:09.139 --rc genhtml_legend=1 00:05:09.139 --rc geninfo_all_blocks=1 00:05:09.139 --rc geninfo_unexecuted_blocks=1 00:05:09.139 00:05:09.139 ' 00:05:09.139 17:21:42 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:09.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.139 --rc genhtml_branch_coverage=1 00:05:09.139 --rc genhtml_function_coverage=1 00:05:09.139 --rc genhtml_legend=1 00:05:09.139 --rc geninfo_all_blocks=1 00:05:09.139 --rc geninfo_unexecuted_blocks=1 00:05:09.139 00:05:09.139 ' 00:05:09.139 17:21:42 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:09.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.139 --rc genhtml_branch_coverage=1 00:05:09.139 --rc genhtml_function_coverage=1 00:05:09.139 --rc genhtml_legend=1 00:05:09.139 --rc geninfo_all_blocks=1 00:05:09.139 --rc geninfo_unexecuted_blocks=1 00:05:09.139 00:05:09.139 ' 00:05:09.139 17:21:42 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:09.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.139 --rc genhtml_branch_coverage=1 00:05:09.139 --rc genhtml_function_coverage=1 00:05:09.139 --rc genhtml_legend=1 
00:05:09.139 --rc geninfo_all_blocks=1 00:05:09.139 --rc geninfo_unexecuted_blocks=1 00:05:09.139 00:05:09.139 ' 00:05:09.139 17:21:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:09.139 17:21:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58028 00:05:09.139 17:21:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:09.139 17:21:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58028 00:05:09.139 17:21:42 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58028 ']' 00:05:09.139 17:21:42 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.139 17:21:42 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.139 17:21:42 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.139 17:21:42 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.139 17:21:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:09.399 [2024-12-07 17:21:42.603221] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:05:09.399 [2024-12-07 17:21:42.603944] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58028 ] 00:05:09.659 [2024-12-07 17:21:42.788834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.659 [2024-12-07 17:21:42.902634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.598 17:21:43 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:10.598 17:21:43 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:10.598 17:21:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:10.598 17:21:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:10.598 17:21:43 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.598 17:21:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:10.598 { 00:05:10.598 "filename": "/tmp/spdk_mem_dump.txt" 00:05:10.598 } 00:05:10.598 17:21:43 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.598 17:21:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:10.598 DPDK memory size 824.000000 MiB in 1 heap(s) 00:05:10.598 1 heaps totaling size 824.000000 MiB 00:05:10.598 size: 824.000000 MiB heap id: 0 00:05:10.598 end heaps---------- 00:05:10.598 9 mempools totaling size 603.782043 MiB 00:05:10.598 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:10.598 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:10.598 size: 100.555481 MiB name: bdev_io_58028 00:05:10.598 size: 50.003479 MiB name: msgpool_58028 00:05:10.598 size: 36.509338 MiB name: fsdev_io_58028 00:05:10.598 size: 
21.763794 MiB name: PDU_Pool 00:05:10.598 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:10.598 size: 4.133484 MiB name: evtpool_58028 00:05:10.598 size: 0.026123 MiB name: Session_Pool 00:05:10.598 end mempools------- 00:05:10.598 6 memzones totaling size 4.142822 MiB 00:05:10.598 size: 1.000366 MiB name: RG_ring_0_58028 00:05:10.598 size: 1.000366 MiB name: RG_ring_1_58028 00:05:10.598 size: 1.000366 MiB name: RG_ring_4_58028 00:05:10.598 size: 1.000366 MiB name: RG_ring_5_58028 00:05:10.598 size: 0.125366 MiB name: RG_ring_2_58028 00:05:10.598 size: 0.015991 MiB name: RG_ring_3_58028 00:05:10.598 end memzones------- 00:05:10.598 17:21:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:10.598 heap id: 0 total size: 824.000000 MiB number of busy elements: 322 number of free elements: 18 00:05:10.598 list of free elements. size: 16.779663 MiB 00:05:10.598 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:10.598 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:10.598 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:10.598 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:10.598 element at address: 0x200019900040 with size: 0.999939 MiB 00:05:10.598 element at address: 0x200019a00000 with size: 0.999084 MiB 00:05:10.598 element at address: 0x200032600000 with size: 0.994324 MiB 00:05:10.598 element at address: 0x200000400000 with size: 0.992004 MiB 00:05:10.598 element at address: 0x200019200000 with size: 0.959656 MiB 00:05:10.598 element at address: 0x200019d00040 with size: 0.936401 MiB 00:05:10.598 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:10.598 element at address: 0x20001b400000 with size: 0.561218 MiB 00:05:10.598 element at address: 0x200000c00000 with size: 0.489197 MiB 00:05:10.598 element at address: 0x200019600000 with size: 0.487976 MiB 00:05:10.598 element at address: 0x200019e00000 
with size: 0.485413 MiB 00:05:10.598 element at address: 0x200012c00000 with size: 0.433228 MiB 00:05:10.598 element at address: 0x200028800000 with size: 0.390442 MiB 00:05:10.598 element at address: 0x200000800000 with size: 0.350891 MiB 00:05:10.598 list of standard malloc elements. size: 199.289429 MiB 00:05:10.598 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:10.599 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:10.599 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:10.599 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:10.599 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:05:10.599 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:10.599 element at address: 0x200019deff40 with size: 0.062683 MiB 00:05:10.599 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:10.599 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:05:10.599 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:05:10.599 element at address: 0x200012bff040 with size: 0.000305 MiB 00:05:10.599 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:10.599 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:10.599 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:05:10.599 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:05:10.599 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:05:10.599 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:05:10.599 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:05:10.599 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:05:10.599 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:05:10.599 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:05:10.599 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:05:10.599 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:05:10.599 element at address: 
0x2000004fe940 with size: 0.000244 MiB 00:05:10.599 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:05:10.599 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:05:10.599 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:05:10.599 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:05:10.599 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:05:10.599 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:05:10.599 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:05:10.599 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:05:10.599 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:05:10.599 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:05:10.599 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:05:10.599 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:05:10.599 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:05:10.599 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:05:10.599 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:10.599 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:10.599 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:05:10.599 
element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:10.599 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:10.599 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x200000c7e0c0 with size: 0.000244 
MiB 00:05:10.599 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:10.599 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:10.599 element at address: 0x200012bff180 
with size: 0.000244 MiB 00:05:10.599 element at address: 0x200012bff280 with size: 0.000244 MiB 00:05:10.599 element at address: 0x200012bff380 with size: 0.000244 MiB 00:05:10.599 element at address: 0x200012bff480 with size: 0.000244 MiB 00:05:10.599 element at address: 0x200012bff580 with size: 0.000244 MiB 00:05:10.599 element at address: 0x200012bff680 with size: 0.000244 MiB 00:05:10.599 element at address: 0x200012bff780 with size: 0.000244 MiB 00:05:10.599 element at address: 0x200012bff880 with size: 0.000244 MiB 00:05:10.599 element at address: 0x200012bff980 with size: 0.000244 MiB 00:05:10.599 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:05:10.599 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:10.599 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:10.599 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:10.599 element at address: 0x200012c6ee80 with size: 0.000244 MiB 00:05:10.599 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:05:10.599 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:05:10.599 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:05:10.599 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:05:10.599 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:05:10.599 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:05:10.599 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:05:10.599 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:05:10.599 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:05:10.599 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:05:10.599 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:05:10.599 element at 
address: 0x20001967d0c0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:05:10.599 element at address: 0x200019affc40 with size: 0.000244 MiB 00:05:10.599 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20001b48fac0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20001b48fbc0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20001b48fcc0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20001b48fdc0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20001b48fec0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20001b4906c0 with size: 0.000244 MiB 
00:05:10.599 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20001b4922c0 with 
size: 0.000244 MiB 00:05:10.599 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:05:10.599 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:05:10.600 element at address: 
0x20001b493ec0 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:05:10.600 element at address: 0x200028863f40 with size: 0.000244 MiB 00:05:10.600 element at address: 0x200028864040 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886af80 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886b080 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886b180 with size: 0.000244 MiB 00:05:10.600 
element at address: 0x20002886b280 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886b380 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886b480 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886b580 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886b680 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886b780 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886b880 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886b980 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886be80 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886c080 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886c180 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886c280 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886c380 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886c480 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886c580 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886c680 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886c780 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886c880 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886c980 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886cd80 with size: 0.000244 
MiB 00:05:10.600 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886d080 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886d180 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886d280 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886d380 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886d480 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886d580 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886d680 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886d780 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886d880 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886d980 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886da80 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886db80 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886de80 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886df80 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886e080 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886e180 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886e280 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886e380 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886e480 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886e580 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886e680 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886e780 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886e880 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886e980 
with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886f080 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886f180 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886f280 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886f380 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886f480 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886f580 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886f680 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886f780 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886f880 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886f980 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:05:10.600 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:05:10.600 list of memzone associated elements. 
size: 607.930908 MiB 00:05:10.600 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:05:10.600 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:10.600 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:05:10.600 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:10.600 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:05:10.600 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58028_0 00:05:10.600 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:10.600 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58028_0 00:05:10.600 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:10.600 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58028_0 00:05:10.600 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:05:10.600 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:10.600 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:05:10.600 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:10.600 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:10.600 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58028_0 00:05:10.600 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:10.600 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58028 00:05:10.600 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:10.600 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58028 00:05:10.600 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:05:10.600 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:10.600 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:05:10.600 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:10.600 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:10.600 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:10.600 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:05:10.600 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:10.600 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:10.600 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58028 00:05:10.600 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:05:10.600 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58028 00:05:10.600 element at address: 0x200019affd40 with size: 1.000549 MiB 00:05:10.600 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58028 00:05:10.600 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:05:10.600 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58028 00:05:10.600 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:10.600 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58028 00:05:10.600 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:10.600 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58028 00:05:10.600 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:05:10.600 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:10.600 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:05:10.600 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:10.600 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:05:10.600 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:10.600 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:10.600 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58028 00:05:10.600 element at address: 0x20000085df80 with size: 0.125549 MiB 00:05:10.600 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58028 00:05:10.600 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:05:10.600 
associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:10.600 element at address: 0x200028864140 with size: 0.023804 MiB 00:05:10.600 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:10.600 element at address: 0x200000859d40 with size: 0.016174 MiB 00:05:10.601 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58028 00:05:10.601 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:05:10.601 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:10.601 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:05:10.601 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58028 00:05:10.601 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:10.601 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58028 00:05:10.601 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:10.601 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58028 00:05:10.601 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:05:10.601 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:10.601 17:21:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:10.601 17:21:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58028 00:05:10.601 17:21:43 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58028 ']' 00:05:10.601 17:21:43 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58028 00:05:10.601 17:21:43 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:10.601 17:21:43 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:10.601 17:21:43 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58028 00:05:10.601 17:21:43 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:10.601 17:21:43 dpdk_mem_utility -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:10.601 killing process with pid 58028 00:05:10.601 17:21:43 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58028' 00:05:10.601 17:21:43 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58028 00:05:10.601 17:21:43 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58028 00:05:13.140 00:05:13.140 real 0m3.993s 00:05:13.140 user 0m3.956s 00:05:13.140 sys 0m0.565s 00:05:13.140 17:21:46 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.140 ************************************ 00:05:13.140 END TEST dpdk_mem_utility 00:05:13.140 ************************************ 00:05:13.140 17:21:46 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:13.140 17:21:46 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:13.140 17:21:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.140 17:21:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.140 17:21:46 -- common/autotest_common.sh@10 -- # set +x 00:05:13.140 ************************************ 00:05:13.140 START TEST event 00:05:13.140 ************************************ 00:05:13.140 17:21:46 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:13.140 * Looking for test storage... 
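The xtrace lines above show the teardown path for pid 58028: check the pid is set and alive, look up the process name with `ps -o comm=`, refuse to signal a `sudo` wrapper, then `kill` and `wait`. A standalone sketch of that sequence, reconstructed from the traced commands (the real helper lives in common/autotest_common.sh; this is a simplified reading of the trace, not the canonical implementation):

```shell
#!/usr/bin/env bash
# Sketch of the killprocess helper exercised in the trace above.
# Behavior follows the traced commands only; treat names and the
# sudo special-casing as a reconstruction.

killprocess() {
	local pid=$1 process_name

	[[ -n $pid ]] || return 1   # the traced '[' -z 58028 ']' guard
	kill -0 "$pid" || return 1  # process must still be alive

	# On Linux the trace resolves the process name via ps.
	if [[ $(uname) == Linux ]]; then
		process_name=$(ps --no-headers -o comm= "$pid")
	fi

	# Never signal a sudo wrapper directly; otherwise kill and reap.
	if [[ $process_name != sudo ]]; then
		echo "killing process with pid $pid"
		kill "$pid"
		wait "$pid" || true  # wait exits non-zero for a signaled child
	fi
}
```

In the trace the name resolves to `reactor_0` (an SPDK reactor thread), so the sudo guard passes and the process is killed and reaped before the test reports its timing.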
00:05:13.140 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:13.140 17:21:46 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:13.140 17:21:46 event -- common/autotest_common.sh@1711 -- # lcov --version 00:05:13.140 17:21:46 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:13.400 17:21:46 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:13.400 17:21:46 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:13.400 17:21:46 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:13.400 17:21:46 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:13.400 17:21:46 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:13.400 17:21:46 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:13.400 17:21:46 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:13.400 17:21:46 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:13.400 17:21:46 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:13.400 17:21:46 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:13.400 17:21:46 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:13.400 17:21:46 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:13.400 17:21:46 event -- scripts/common.sh@344 -- # case "$op" in 00:05:13.400 17:21:46 event -- scripts/common.sh@345 -- # : 1 00:05:13.400 17:21:46 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:13.400 17:21:46 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:13.400 17:21:46 event -- scripts/common.sh@365 -- # decimal 1 00:05:13.400 17:21:46 event -- scripts/common.sh@353 -- # local d=1 00:05:13.400 17:21:46 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:13.400 17:21:46 event -- scripts/common.sh@355 -- # echo 1 00:05:13.400 17:21:46 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:13.400 17:21:46 event -- scripts/common.sh@366 -- # decimal 2 00:05:13.400 17:21:46 event -- scripts/common.sh@353 -- # local d=2 00:05:13.400 17:21:46 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:13.400 17:21:46 event -- scripts/common.sh@355 -- # echo 2 00:05:13.400 17:21:46 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:13.400 17:21:46 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:13.400 17:21:46 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:13.400 17:21:46 event -- scripts/common.sh@368 -- # return 0 00:05:13.400 17:21:46 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:13.400 17:21:46 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:13.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.400 --rc genhtml_branch_coverage=1 00:05:13.400 --rc genhtml_function_coverage=1 00:05:13.400 --rc genhtml_legend=1 00:05:13.400 --rc geninfo_all_blocks=1 00:05:13.400 --rc geninfo_unexecuted_blocks=1 00:05:13.400 00:05:13.400 ' 00:05:13.400 17:21:46 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:13.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.400 --rc genhtml_branch_coverage=1 00:05:13.400 --rc genhtml_function_coverage=1 00:05:13.400 --rc genhtml_legend=1 00:05:13.400 --rc geninfo_all_blocks=1 00:05:13.400 --rc geninfo_unexecuted_blocks=1 00:05:13.400 00:05:13.400 ' 00:05:13.400 17:21:46 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:13.400 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:13.400 --rc genhtml_branch_coverage=1 00:05:13.400 --rc genhtml_function_coverage=1 00:05:13.400 --rc genhtml_legend=1 00:05:13.400 --rc geninfo_all_blocks=1 00:05:13.400 --rc geninfo_unexecuted_blocks=1 00:05:13.400 00:05:13.400 ' 00:05:13.400 17:21:46 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:13.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.400 --rc genhtml_branch_coverage=1 00:05:13.400 --rc genhtml_function_coverage=1 00:05:13.400 --rc genhtml_legend=1 00:05:13.400 --rc geninfo_all_blocks=1 00:05:13.400 --rc geninfo_unexecuted_blocks=1 00:05:13.400 00:05:13.400 ' 00:05:13.400 17:21:46 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:13.400 17:21:46 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:13.400 17:21:46 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:13.400 17:21:46 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:13.400 17:21:46 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.400 17:21:46 event -- common/autotest_common.sh@10 -- # set +x 00:05:13.400 ************************************ 00:05:13.400 START TEST event_perf 00:05:13.400 ************************************ 00:05:13.400 17:21:46 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:13.400 Running I/O for 1 seconds...[2024-12-07 17:21:46.651339] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:05:13.400 [2024-12-07 17:21:46.651556] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58136 ] 00:05:13.660 [2024-12-07 17:21:46.836744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:13.660 [2024-12-07 17:21:46.958520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.660 [2024-12-07 17:21:46.958574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:13.660 [2024-12-07 17:21:46.958752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.660 Running I/O for 1 seconds...[2024-12-07 17:21:46.958775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:15.041 00:05:15.041 lcore 0: 83634 00:05:15.041 lcore 1: 83625 00:05:15.041 lcore 2: 83628 00:05:15.041 lcore 3: 83630 00:05:15.041 done. 
00:05:15.041 00:05:15.041 real 0m1.607s 00:05:15.041 user 0m4.348s 00:05:15.041 sys 0m0.132s 00:05:15.041 17:21:48 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.041 17:21:48 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:15.041 ************************************ 00:05:15.041 END TEST event_perf 00:05:15.041 ************************************ 00:05:15.041 17:21:48 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:15.041 17:21:48 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:15.041 17:21:48 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.041 17:21:48 event -- common/autotest_common.sh@10 -- # set +x 00:05:15.041 ************************************ 00:05:15.041 START TEST event_reactor 00:05:15.041 ************************************ 00:05:15.041 17:21:48 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:15.041 [2024-12-07 17:21:48.325656] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:05:15.041 [2024-12-07 17:21:48.325801] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58181 ] 00:05:15.300 [2024-12-07 17:21:48.506308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.300 [2024-12-07 17:21:48.613397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.741 test_start 00:05:16.741 oneshot 00:05:16.741 tick 100 00:05:16.741 tick 100 00:05:16.741 tick 250 00:05:16.741 tick 100 00:05:16.741 tick 100 00:05:16.741 tick 100 00:05:16.741 tick 250 00:05:16.741 tick 500 00:05:16.741 tick 100 00:05:16.741 tick 100 00:05:16.741 tick 250 00:05:16.741 tick 100 00:05:16.741 tick 100 00:05:16.741 test_end 00:05:16.741 00:05:16.741 real 0m1.560s 00:05:16.741 user 0m1.354s 00:05:16.741 sys 0m0.098s 00:05:16.741 17:21:49 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.741 17:21:49 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:16.741 ************************************ 00:05:16.741 END TEST event_reactor 00:05:16.741 ************************************ 00:05:16.741 17:21:49 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:16.741 17:21:49 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:16.741 17:21:49 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:16.741 17:21:49 event -- common/autotest_common.sh@10 -- # set +x 00:05:16.741 ************************************ 00:05:16.741 START TEST event_reactor_perf 00:05:16.741 ************************************ 00:05:16.741 17:21:49 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:16.741 [2024-12-07 
17:21:49.957975] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:05:16.741 [2024-12-07 17:21:49.958197] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58212 ] 00:05:17.000 [2024-12-07 17:21:50.140140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.000 [2024-12-07 17:21:50.255731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.380 test_start 00:05:18.380 test_end 00:05:18.380 Performance: 386835 events per second 00:05:18.380 00:05:18.380 real 0m1.569s 00:05:18.380 user 0m1.358s 00:05:18.380 sys 0m0.102s 00:05:18.380 ************************************ 00:05:18.380 END TEST event_reactor_perf 00:05:18.380 ************************************ 00:05:18.380 17:21:51 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.380 17:21:51 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:18.380 17:21:51 event -- event/event.sh@49 -- # uname -s 00:05:18.380 17:21:51 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:18.380 17:21:51 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:18.380 17:21:51 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:18.380 17:21:51 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.380 17:21:51 event -- common/autotest_common.sh@10 -- # set +x 00:05:18.380 ************************************ 00:05:18.380 START TEST event_scheduler 00:05:18.380 ************************************ 00:05:18.380 17:21:51 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:18.380 * Looking for test storage... 
00:05:18.380 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:18.380 17:21:51 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:18.380 17:21:51 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:05:18.380 17:21:51 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:18.380 17:21:51 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:18.380 17:21:51 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.380 17:21:51 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.380 17:21:51 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.380 17:21:51 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.380 17:21:51 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.380 17:21:51 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.380 17:21:51 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.380 17:21:51 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.380 17:21:51 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.380 17:21:51 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.380 17:21:51 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.640 17:21:51 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:18.640 17:21:51 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:18.640 17:21:51 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.640 17:21:51 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:18.640 17:21:51 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:18.640 17:21:51 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:18.640 17:21:51 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.640 17:21:51 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:18.640 17:21:51 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.640 17:21:51 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:18.640 17:21:51 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:18.640 17:21:51 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.640 17:21:51 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:18.640 17:21:51 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.640 17:21:51 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.640 17:21:51 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.640 17:21:51 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:18.640 17:21:51 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.640 17:21:51 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:18.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.640 --rc genhtml_branch_coverage=1 00:05:18.640 --rc genhtml_function_coverage=1 00:05:18.640 --rc genhtml_legend=1 00:05:18.640 --rc geninfo_all_blocks=1 00:05:18.640 --rc geninfo_unexecuted_blocks=1 00:05:18.640 00:05:18.640 ' 00:05:18.640 17:21:51 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:18.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.640 --rc genhtml_branch_coverage=1 00:05:18.640 --rc genhtml_function_coverage=1 00:05:18.640 --rc 
genhtml_legend=1 00:05:18.640 --rc geninfo_all_blocks=1 00:05:18.640 --rc geninfo_unexecuted_blocks=1 00:05:18.640 00:05:18.640 ' 00:05:18.641 17:21:51 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:18.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.641 --rc genhtml_branch_coverage=1 00:05:18.641 --rc genhtml_function_coverage=1 00:05:18.641 --rc genhtml_legend=1 00:05:18.641 --rc geninfo_all_blocks=1 00:05:18.641 --rc geninfo_unexecuted_blocks=1 00:05:18.641 00:05:18.641 ' 00:05:18.641 17:21:51 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:18.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.641 --rc genhtml_branch_coverage=1 00:05:18.641 --rc genhtml_function_coverage=1 00:05:18.641 --rc genhtml_legend=1 00:05:18.641 --rc geninfo_all_blocks=1 00:05:18.641 --rc geninfo_unexecuted_blocks=1 00:05:18.641 00:05:18.641 ' 00:05:18.641 17:21:51 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:18.641 17:21:51 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58288 00:05:18.641 17:21:51 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:18.641 17:21:51 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:18.641 17:21:51 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58288 00:05:18.641 17:21:51 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58288 ']' 00:05:18.641 17:21:51 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.641 17:21:51 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:18.641 17:21:51 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:05:18.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.641 17:21:51 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:18.641 17:21:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:18.641 [2024-12-07 17:21:51.868791] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:05:18.641 [2024-12-07 17:21:51.868915] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58288 ] 00:05:18.900 [2024-12-07 17:21:52.045900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:18.900 [2024-12-07 17:21:52.209879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.900 [2024-12-07 17:21:52.210039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.900 [2024-12-07 17:21:52.210200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:18.900 [2024-12-07 17:21:52.210233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:19.470 17:21:52 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:19.471 17:21:52 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:19.471 17:21:52 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:19.471 17:21:52 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:19.471 17:21:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:19.471 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:19.471 POWER: Cannot set governor of lcore 0 to userspace 00:05:19.471 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:19.471 POWER: Cannot set governor of lcore 0 to performance 00:05:19.471 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:19.471 POWER: Cannot set governor of lcore 0 to userspace 00:05:19.471 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:19.471 POWER: Cannot set governor of lcore 0 to userspace 00:05:19.471 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:19.471 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:19.471 POWER: Unable to set Power Management Environment for lcore 0 00:05:19.471 [2024-12-07 17:21:52.695827] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:19.471 [2024-12-07 17:21:52.695885] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:19.471 [2024-12-07 17:21:52.695924] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:19.471 [2024-12-07 17:21:52.695993] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:19.471 [2024-12-07 17:21:52.696027] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:19.471 [2024-12-07 17:21:52.696064] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:19.471 17:21:52 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:19.471 17:21:52 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:19.471 17:21:52 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:19.471 17:21:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:20.040 [2024-12-07 17:21:53.125545] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:20.040 17:21:53 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.040 17:21:53 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:20.041 17:21:53 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:20.041 17:21:53 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.041 17:21:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:20.041 ************************************ 00:05:20.041 START TEST scheduler_create_thread 00:05:20.041 ************************************ 00:05:20.041 17:21:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:20.041 17:21:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:20.041 17:21:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.041 17:21:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.041 2 00:05:20.041 17:21:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.041 17:21:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:20.041 17:21:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.041 17:21:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.041 3 00:05:20.041 17:21:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.041 17:21:53 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:20.041 17:21:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.041 17:21:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.041 4 00:05:20.041 17:21:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.041 17:21:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:20.041 17:21:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.041 17:21:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.041 5 00:05:20.041 17:21:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.041 17:21:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:20.041 17:21:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.041 17:21:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.041 6 00:05:20.041 17:21:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.041 17:21:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:20.041 17:21:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.041 17:21:53 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:05:20.041 7 00:05:20.041 17:21:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.041 17:21:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:20.041 17:21:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.041 17:21:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.041 8 00:05:20.041 17:21:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.041 17:21:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:20.041 17:21:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.041 17:21:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.041 9 00:05:20.041 17:21:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.041 17:21:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:20.041 17:21:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.041 17:21:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.041 10 00:05:20.041 17:21:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.041 17:21:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:05:20.041 17:21:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.041 17:21:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.423 17:21:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.423 17:21:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:21.423 17:21:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:21.423 17:21:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.423 17:21:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.362 17:21:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:22.362 17:21:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:22.362 17:21:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:22.362 17:21:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.930 17:21:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:22.930 17:21:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:22.930 17:21:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:22.930 17:21:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:22.930 17:21:56 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.870 17:21:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.870 ************************************ 00:05:23.870 END TEST scheduler_create_thread 00:05:23.870 ************************************ 00:05:23.870 00:05:23.870 real 0m3.886s 00:05:23.870 user 0m0.034s 00:05:23.870 sys 0m0.005s 00:05:23.870 17:21:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.870 17:21:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.870 17:21:57 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:23.870 17:21:57 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58288 00:05:23.870 17:21:57 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58288 ']' 00:05:23.870 17:21:57 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58288 00:05:23.870 17:21:57 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:23.870 17:21:57 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:23.870 17:21:57 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58288 00:05:23.870 killing process with pid 58288 00:05:23.870 17:21:57 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:23.870 17:21:57 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:23.870 17:21:57 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58288' 00:05:23.870 17:21:57 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58288 00:05:23.870 17:21:57 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58288 00:05:24.130 [2024-12-07 17:21:57.405638] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:25.510 00:05:25.510 real 0m7.294s 00:05:25.510 user 0m14.858s 00:05:25.510 sys 0m0.580s 00:05:25.510 ************************************ 00:05:25.510 END TEST event_scheduler 00:05:25.510 ************************************ 00:05:25.510 17:21:58 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.510 17:21:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:25.770 17:21:58 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:25.770 17:21:58 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:25.770 17:21:58 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.770 17:21:58 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.770 17:21:58 event -- common/autotest_common.sh@10 -- # set +x 00:05:25.770 ************************************ 00:05:25.770 START TEST app_repeat 00:05:25.770 ************************************ 00:05:25.770 17:21:58 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:25.770 17:21:58 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.770 17:21:58 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.770 17:21:58 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:25.770 17:21:58 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:25.770 17:21:58 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:25.770 17:21:58 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:25.770 17:21:58 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:25.770 17:21:58 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58416 00:05:25.770 17:21:58 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:25.770 
17:21:58 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:25.770 17:21:58 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58416' 00:05:25.770 Process app_repeat pid: 58416 00:05:25.770 17:21:58 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:25.770 17:21:58 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:25.770 spdk_app_start Round 0 00:05:25.770 17:21:58 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58416 /var/tmp/spdk-nbd.sock 00:05:25.770 17:21:58 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58416 ']' 00:05:25.770 17:21:58 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:25.770 17:21:58 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:25.770 17:21:58 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:25.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:25.770 17:21:58 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:25.770 17:21:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:25.770 [2024-12-07 17:21:58.988303] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:05:25.770 [2024-12-07 17:21:58.988469] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58416 ] 00:05:26.030 [2024-12-07 17:21:59.165635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:26.030 [2024-12-07 17:21:59.285480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.030 [2024-12-07 17:21:59.285522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.599 17:21:59 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:26.599 17:21:59 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:26.599 17:21:59 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:26.862 Malloc0 00:05:26.862 17:22:00 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:27.430 Malloc1 00:05:27.430 17:22:00 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:27.430 17:22:00 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.430 17:22:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:27.430 17:22:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:27.430 17:22:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.430 17:22:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:27.430 17:22:00 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:27.430 17:22:00 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.430 17:22:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:27.430 17:22:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:27.430 17:22:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.430 17:22:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:27.430 17:22:00 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:27.430 17:22:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:27.430 17:22:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.430 17:22:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:27.430 /dev/nbd0 00:05:27.430 17:22:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:27.430 17:22:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:27.430 17:22:00 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:27.430 17:22:00 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:27.430 17:22:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:27.430 17:22:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:27.430 17:22:00 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:27.430 17:22:00 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:27.430 17:22:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:27.430 17:22:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:27.430 17:22:00 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:27.430 1+0 records in 00:05:27.430 1+0 
records out 00:05:27.430 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000242512 s, 16.9 MB/s 00:05:27.430 17:22:00 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:27.430 17:22:00 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:27.430 17:22:00 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:27.430 17:22:00 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:27.430 17:22:00 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:27.430 17:22:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:27.430 17:22:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.430 17:22:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:27.694 /dev/nbd1 00:05:27.694 17:22:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:27.694 17:22:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:27.694 17:22:01 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:27.694 17:22:01 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:27.694 17:22:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:27.694 17:22:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:27.694 17:22:01 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:27.694 17:22:01 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:27.694 17:22:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:27.694 17:22:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:27.694 17:22:01 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:27.694 1+0 records in 00:05:27.694 1+0 records out 00:05:27.694 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000286028 s, 14.3 MB/s 00:05:27.694 17:22:01 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:27.694 17:22:01 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:27.694 17:22:01 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:27.694 17:22:01 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:27.694 17:22:01 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:27.694 17:22:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:27.694 17:22:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.694 17:22:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:27.694 17:22:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.694 17:22:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:27.960 17:22:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:27.960 { 00:05:27.960 "nbd_device": "/dev/nbd0", 00:05:27.960 "bdev_name": "Malloc0" 00:05:27.960 }, 00:05:27.960 { 00:05:27.960 "nbd_device": "/dev/nbd1", 00:05:27.960 "bdev_name": "Malloc1" 00:05:27.960 } 00:05:27.960 ]' 00:05:27.960 17:22:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:27.960 { 00:05:27.960 "nbd_device": "/dev/nbd0", 00:05:27.960 "bdev_name": "Malloc0" 00:05:27.960 }, 00:05:27.960 { 00:05:27.960 "nbd_device": "/dev/nbd1", 00:05:27.960 "bdev_name": "Malloc1" 00:05:27.960 } 00:05:27.960 ]' 00:05:27.960 17:22:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:05:28.219 17:22:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:28.219 /dev/nbd1' 00:05:28.219 17:22:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:28.219 /dev/nbd1' 00:05:28.219 17:22:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:28.219 17:22:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:28.219 17:22:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:28.219 17:22:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:28.219 17:22:01 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:28.219 17:22:01 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:28.219 17:22:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.219 17:22:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:28.219 17:22:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:28.219 17:22:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:28.219 17:22:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:28.219 17:22:01 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:28.219 256+0 records in 00:05:28.219 256+0 records out 00:05:28.219 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0145167 s, 72.2 MB/s 00:05:28.219 17:22:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:28.219 17:22:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:28.219 256+0 records in 00:05:28.219 256+0 records out 00:05:28.219 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.024184 s, 43.4 MB/s 00:05:28.219 17:22:01 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:28.219 17:22:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:28.219 256+0 records in 00:05:28.219 256+0 records out 00:05:28.219 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0311418 s, 33.7 MB/s 00:05:28.219 17:22:01 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:28.219 17:22:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.219 17:22:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:28.219 17:22:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:28.219 17:22:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:28.219 17:22:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:28.219 17:22:01 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:28.219 17:22:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:28.219 17:22:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:28.219 17:22:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:28.219 17:22:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:28.219 17:22:01 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:28.219 17:22:01 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:28.219 17:22:01 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.219 17:22:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.219 17:22:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:28.219 17:22:01 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:28.219 17:22:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:28.219 17:22:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:28.476 17:22:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:28.476 17:22:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:28.476 17:22:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:28.476 17:22:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:28.477 17:22:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:28.477 17:22:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:28.477 17:22:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:28.477 17:22:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:28.477 17:22:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:28.477 17:22:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:28.735 17:22:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:28.735 17:22:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:28.735 17:22:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:28.735 17:22:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:28.735 17:22:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:28.735 17:22:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:28.735 17:22:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:05:28.735 17:22:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:28.735 17:22:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:28.735 17:22:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.735 17:22:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:28.994 17:22:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:28.994 17:22:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:28.994 17:22:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:28.994 17:22:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:28.994 17:22:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:28.994 17:22:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:28.994 17:22:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:28.994 17:22:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:28.994 17:22:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:28.994 17:22:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:28.994 17:22:02 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:28.994 17:22:02 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:28.994 17:22:02 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:29.561 17:22:02 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:30.500 [2024-12-07 17:22:03.778284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:30.759 [2024-12-07 17:22:03.888444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.759 [2024-12-07 17:22:03.888446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.759 
[2024-12-07 17:22:04.077229] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:30.759 [2024-12-07 17:22:04.077397] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:32.665 17:22:05 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:32.665 spdk_app_start Round 1 00:05:32.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:32.665 17:22:05 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:32.665 17:22:05 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58416 /var/tmp/spdk-nbd.sock 00:05:32.665 17:22:05 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58416 ']' 00:05:32.665 17:22:05 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:32.665 17:22:05 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:32.665 17:22:05 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:32.665 17:22:05 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:32.665 17:22:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:32.665 17:22:05 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.665 17:22:05 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:32.665 17:22:05 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:32.924 Malloc0 00:05:32.924 17:22:06 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:33.184 Malloc1 00:05:33.184 17:22:06 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:33.184 17:22:06 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.184 17:22:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:33.184 17:22:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:33.184 17:22:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.184 17:22:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:33.184 17:22:06 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:33.184 17:22:06 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.184 17:22:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:33.184 17:22:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:33.184 17:22:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.184 17:22:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:33.184 17:22:06 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:33.184 17:22:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:33.184 17:22:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:33.184 17:22:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:33.444 /dev/nbd0 00:05:33.444 17:22:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:33.444 17:22:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:33.444 17:22:06 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:33.444 17:22:06 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:33.444 17:22:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:33.444 17:22:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:33.444 17:22:06 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:33.444 17:22:06 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:33.444 17:22:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:33.444 17:22:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:33.444 17:22:06 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:33.444 1+0 records in 00:05:33.444 1+0 records out 00:05:33.444 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000552773 s, 7.4 MB/s 00:05:33.444 17:22:06 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:33.444 17:22:06 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:33.444 17:22:06 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:33.444 17:22:06 
event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:33.444 17:22:06 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:33.444 17:22:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:33.444 17:22:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:33.444 17:22:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:33.704 /dev/nbd1 00:05:33.704 17:22:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:33.704 17:22:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:33.704 17:22:06 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:33.704 17:22:06 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:33.704 17:22:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:33.704 17:22:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:33.704 17:22:06 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:33.704 17:22:06 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:33.704 17:22:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:33.704 17:22:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:33.704 17:22:06 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:33.704 1+0 records in 00:05:33.704 1+0 records out 00:05:33.704 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000532798 s, 7.7 MB/s 00:05:33.704 17:22:06 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:33.704 17:22:06 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:33.704 17:22:06 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:33.704 17:22:06 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:33.704 17:22:06 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:33.704 17:22:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:33.704 17:22:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:33.704 17:22:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:33.704 17:22:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.704 17:22:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:33.964 17:22:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:33.964 { 00:05:33.964 "nbd_device": "/dev/nbd0", 00:05:33.964 "bdev_name": "Malloc0" 00:05:33.964 }, 00:05:33.964 { 00:05:33.964 "nbd_device": "/dev/nbd1", 00:05:33.964 "bdev_name": "Malloc1" 00:05:33.964 } 00:05:33.964 ]' 00:05:33.964 17:22:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:33.964 { 00:05:33.964 "nbd_device": "/dev/nbd0", 00:05:33.964 "bdev_name": "Malloc0" 00:05:33.964 }, 00:05:33.964 { 00:05:33.964 "nbd_device": "/dev/nbd1", 00:05:33.964 "bdev_name": "Malloc1" 00:05:33.964 } 00:05:33.964 ]' 00:05:33.964 17:22:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:33.964 17:22:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:33.964 /dev/nbd1' 00:05:33.964 17:22:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:33.964 /dev/nbd1' 00:05:33.964 17:22:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:33.964 17:22:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:33.964 17:22:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:33.964 
17:22:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:33.964 17:22:07 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:33.964 17:22:07 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:33.964 17:22:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.964 17:22:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:33.964 17:22:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:33.964 17:22:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:33.964 17:22:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:33.964 17:22:07 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:33.964 256+0 records in 00:05:33.964 256+0 records out 00:05:33.964 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00458649 s, 229 MB/s 00:05:33.964 17:22:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:33.964 17:22:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:33.964 256+0 records in 00:05:33.964 256+0 records out 00:05:33.964 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0279812 s, 37.5 MB/s 00:05:33.964 17:22:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:33.964 17:22:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:33.964 256+0 records in 00:05:33.964 256+0 records out 00:05:33.964 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0292906 s, 35.8 MB/s 00:05:33.964 17:22:07 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:05:33.964 17:22:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.964 17:22:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:33.964 17:22:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:33.964 17:22:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:33.964 17:22:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:33.964 17:22:07 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:33.964 17:22:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:33.964 17:22:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:33.965 17:22:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:33.965 17:22:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:33.965 17:22:07 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:33.965 17:22:07 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:33.965 17:22:07 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.965 17:22:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.965 17:22:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:33.965 17:22:07 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:33.965 17:22:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:33.965 17:22:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:34.224 17:22:07 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:34.224 17:22:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:34.224 17:22:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:34.224 17:22:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:34.224 17:22:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:34.224 17:22:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:34.224 17:22:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:34.224 17:22:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:34.224 17:22:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:34.224 17:22:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:34.485 17:22:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:34.485 17:22:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:34.485 17:22:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:34.485 17:22:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:34.485 17:22:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:34.485 17:22:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:34.485 17:22:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:34.485 17:22:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:34.485 17:22:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:34.485 17:22:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.485 17:22:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:34.745 17:22:07 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:34.745 17:22:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:34.745 17:22:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:34.745 17:22:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:34.745 17:22:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:34.745 17:22:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:34.745 17:22:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:34.745 17:22:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:34.745 17:22:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:34.745 17:22:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:34.745 17:22:07 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:34.745 17:22:07 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:34.745 17:22:07 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:35.005 17:22:08 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:36.385 [2024-12-07 17:22:09.488209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:36.385 [2024-12-07 17:22:09.598055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.385 [2024-12-07 17:22:09.598098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:36.685 [2024-12-07 17:22:09.783554] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:36.685 [2024-12-07 17:22:09.783684] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
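For context on what each app_repeat round above is exercising: `nbd_dd_data_verify` writes 256 x 4 KiB blocks of random data through `/dev/nbd0` and `/dev/nbd1` with `dd`, then byte-compares the first 1 MiB back with `cmp`. A minimal standalone sketch of that write-then-verify pattern (the function name and target path here are hypothetical, not part of the SPDK scripts; the real helper targets nbd devices with `oflag=direct`, which this sketch drops so it also works against a plain file):

```shell
#!/usr/bin/env bash
# Sketch of the dd-write / cmp-verify pattern seen in this trace.
# Hypothetical helper; the real nbd_dd_data_verify lives in
# test/bdev/nbd_common.sh and targets /dev/nbdN exported by spdk-nbd.
set -euo pipefail

nbd_dd_data_verify_sketch() {
    local target=$1 tmp_file
    tmp_file=$(mktemp)   # stands in for .../test/event/nbdrandtest

    # write phase: 256 x 4 KiB blocks of random data onto the target
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 status=none
    dd if="$tmp_file" of="$target" bs=4096 count=256 conv=notrunc status=none

    # verify phase: byte-compare the first 1 MiB; cmp exits non-zero
    # on the first mismatching byte, which aborts under set -e
    cmp -n 1M "$tmp_file" "$target"

    rm -f "$tmp_file"
}
```

Running the round three times (the `for i in {0..2}` loop in event.sh) repeats this write/verify cycle against freshly created Malloc bdevs after each `spdk_kill_instance SIGTERM` restart.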
00:05:38.061 17:22:11 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:38.061 spdk_app_start Round 2 00:05:38.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:38.061 17:22:11 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:38.061 17:22:11 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58416 /var/tmp/spdk-nbd.sock 00:05:38.061 17:22:11 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58416 ']' 00:05:38.062 17:22:11 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:38.062 17:22:11 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:38.062 17:22:11 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:38.062 17:22:11 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:38.062 17:22:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:38.321 17:22:11 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:38.321 17:22:11 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:38.321 17:22:11 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:38.580 Malloc0 00:05:38.580 17:22:11 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:38.839 Malloc1 00:05:38.839 17:22:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:38.839 17:22:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.839 17:22:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:38.839 
17:22:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:38.839 17:22:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.839 17:22:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:38.839 17:22:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:38.839 17:22:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.839 17:22:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:38.839 17:22:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:38.839 17:22:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.839 17:22:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:38.839 17:22:12 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:38.839 17:22:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:38.839 17:22:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.839 17:22:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:39.099 /dev/nbd0 00:05:39.099 17:22:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:39.099 17:22:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:39.099 17:22:12 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:39.099 17:22:12 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:39.099 17:22:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:39.099 17:22:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:39.099 17:22:12 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:39.099 17:22:12 
event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:39.099 17:22:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:39.099 17:22:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:39.099 17:22:12 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:39.099 1+0 records in 00:05:39.099 1+0 records out 00:05:39.099 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000214136 s, 19.1 MB/s 00:05:39.099 17:22:12 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:39.099 17:22:12 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:39.099 17:22:12 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:39.099 17:22:12 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:39.099 17:22:12 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:39.099 17:22:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:39.099 17:22:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.099 17:22:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:39.359 /dev/nbd1 00:05:39.359 17:22:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:39.359 17:22:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:39.359 17:22:12 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:39.359 17:22:12 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:39.359 17:22:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:39.359 17:22:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:39.359 17:22:12 
event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:39.359 17:22:12 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:39.359 17:22:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:39.359 17:22:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:39.359 17:22:12 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:39.359 1+0 records in 00:05:39.359 1+0 records out 00:05:39.359 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000439863 s, 9.3 MB/s 00:05:39.359 17:22:12 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:39.359 17:22:12 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:39.359 17:22:12 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:39.359 17:22:12 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:39.359 17:22:12 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:39.359 17:22:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:39.359 17:22:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.359 17:22:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:39.359 17:22:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.359 17:22:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:39.619 17:22:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:39.619 { 00:05:39.619 "nbd_device": "/dev/nbd0", 00:05:39.619 "bdev_name": "Malloc0" 00:05:39.619 }, 00:05:39.619 { 00:05:39.619 "nbd_device": "/dev/nbd1", 00:05:39.619 "bdev_name": 
"Malloc1" 00:05:39.619 } 00:05:39.619 ]' 00:05:39.619 17:22:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:39.619 { 00:05:39.619 "nbd_device": "/dev/nbd0", 00:05:39.619 "bdev_name": "Malloc0" 00:05:39.619 }, 00:05:39.619 { 00:05:39.619 "nbd_device": "/dev/nbd1", 00:05:39.619 "bdev_name": "Malloc1" 00:05:39.619 } 00:05:39.619 ]' 00:05:39.619 17:22:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:39.619 17:22:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:39.619 /dev/nbd1' 00:05:39.619 17:22:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:39.619 17:22:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:39.619 /dev/nbd1' 00:05:39.619 17:22:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:39.619 17:22:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:39.619 17:22:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:39.619 17:22:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:39.619 17:22:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:39.619 17:22:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.619 17:22:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:39.619 17:22:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:39.619 17:22:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:39.619 17:22:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:39.619 17:22:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:39.619 256+0 records in 00:05:39.619 256+0 records out 00:05:39.619 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0125935 s, 83.3 MB/s 
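The data-verify flow around this point fills a temp file from /dev/urandom, streams it to each /dev/nbdX with dd, and later byte-compares the two with cmp. A minimal sketch of that round trip, using plain temp files in place of the nbd devices (paths are illustrative; oflag=direct is dropped because regular files, unlike nbd block devices, gain nothing from O_DIRECT):

```shell
# stand-ins: one temp file for the random-data source, one for /dev/nbd0
tmp_file=$(mktemp)
target=$(mktemp)

# write phase: 256 x 4 KiB of random data, then stream it to the target
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 2>/dev/null
dd if="$tmp_file" of="$target" bs=4096 count=256 2>/dev/null

# verify phase: cmp exits non-zero at the first differing byte
cmp -b -n 1M "$tmp_file" "$target" && echo "verify ok"
```

The harness runs the same cmp once per device against the one shared source file, so a single corrupted write on either device fails the whole test.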
00:05:39.619 17:22:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:39.619 17:22:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:39.619 256+0 records in 00:05:39.619 256+0 records out 00:05:39.619 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0229671 s, 45.7 MB/s 00:05:39.619 17:22:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:39.619 17:22:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:39.619 256+0 records in 00:05:39.619 256+0 records out 00:05:39.619 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0266627 s, 39.3 MB/s 00:05:39.619 17:22:12 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:39.619 17:22:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.619 17:22:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:39.619 17:22:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:39.619 17:22:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:39.619 17:22:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:39.619 17:22:12 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:39.619 17:22:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:39.619 17:22:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:39.619 17:22:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:39.619 17:22:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:05:39.619 17:22:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:39.619 17:22:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:39.619 17:22:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.619 17:22:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.619 17:22:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:39.619 17:22:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:39.619 17:22:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:39.619 17:22:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:39.879 17:22:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:39.879 17:22:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:39.879 17:22:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:39.879 17:22:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:39.879 17:22:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:39.879 17:22:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:39.879 17:22:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:39.879 17:22:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:39.879 17:22:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:39.879 17:22:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:40.138 17:22:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:40.138 17:22:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:05:40.138 17:22:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:40.138 17:22:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:40.138 17:22:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:40.138 17:22:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:40.138 17:22:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:40.138 17:22:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:40.138 17:22:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:40.138 17:22:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.138 17:22:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:40.398 17:22:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:40.398 17:22:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:40.398 17:22:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:40.398 17:22:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:40.398 17:22:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:40.398 17:22:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:40.398 17:22:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:40.398 17:22:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:40.398 17:22:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:40.398 17:22:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:40.398 17:22:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:40.398 17:22:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:40.398 17:22:13 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:40.967 17:22:14 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:41.903 [2024-12-07 17:22:15.211542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:42.162 [2024-12-07 17:22:15.318494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.162 [2024-12-07 17:22:15.318500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.162 [2024-12-07 17:22:15.511734] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:42.162 [2024-12-07 17:22:15.511798] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:44.081 17:22:17 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58416 /var/tmp/spdk-nbd.sock 00:05:44.081 17:22:17 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58416 ']' 00:05:44.081 17:22:17 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:44.081 17:22:17 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:44.081 17:22:17 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
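The nbd_get_count steps above derive the attached-device count by listing disks over RPC, extracting `.nbd_device` with jq, and counting matching lines with `grep -c`. The counting step can be sketched against a literal name list (the RPC/jq stage is elided; the list contents are illustrative):

```shell
# names as produced by: rpc.py nbd_get_disks | jq -r '.[] | .nbd_device'
nbd_disks_name='/dev/nbd0
/dev/nbd1'

# grep -c prints the number of matching lines; on an empty list it prints 0
# but exits 1, so '|| true' keeps the pipeline from failing the script
count=$(printf '%s\n' "$nbd_disks_name" | grep -c /dev/nbd || true)
echo "$count"   # → 2
```

After the disks are stopped the same pipeline yields 0, which is exactly the `'[' 0 -ne 0 ']'` check the harness makes before returning success.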
00:05:44.081 17:22:17 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.081 17:22:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:44.081 17:22:17 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:44.081 17:22:17 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:44.081 17:22:17 event.app_repeat -- event/event.sh@39 -- # killprocess 58416 00:05:44.081 17:22:17 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58416 ']' 00:05:44.081 17:22:17 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58416 00:05:44.081 17:22:17 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:44.081 17:22:17 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:44.081 17:22:17 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58416 00:05:44.081 17:22:17 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:44.081 17:22:17 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:44.081 killing process with pid 58416 00:05:44.081 17:22:17 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58416' 00:05:44.081 17:22:17 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58416 00:05:44.081 17:22:17 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58416 00:05:45.021 spdk_app_start is called in Round 0. 00:05:45.021 Shutdown signal received, stop current app iteration 00:05:45.021 Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 reinitialization... 00:05:45.021 spdk_app_start is called in Round 1. 00:05:45.021 Shutdown signal received, stop current app iteration 00:05:45.021 Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 reinitialization... 00:05:45.021 spdk_app_start is called in Round 2. 
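The killprocess helper traced above refuses to signal a pid unless it is still alive (`kill -0`) and its command name is not `sudo`. A hedged re-sketch of that guard (the function name follows the log; the body is simplified to a plain SIGTERM plus wait, whereas the real helper lives in autotest_common.sh):

```shell
killprocess() {
    local pid=$1
    # kill -0 probes for existence without delivering a signal
    kill -0 "$pid" 2>/dev/null || return 1
    local process_name
    process_name=$(ps -o comm= -p "$pid")
    # never signal a process whose command name is the sudo wrapper
    [ "$process_name" = sudo ] && return 1
    echo "killing process with pid $pid"
    kill "$pid"
    # reap it; SIGTERM makes wait return 143, which is expected here
    wait "$pid" 2>/dev/null || true
}
```

Usage mirrors the trace: start a target in the background, then `killprocess $!` once the test is done with it.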
00:05:45.021 Shutdown signal received, stop current app iteration 00:05:45.021 Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 reinitialization... 00:05:45.021 spdk_app_start is called in Round 3. 00:05:45.021 Shutdown signal received, stop current app iteration 00:05:45.021 17:22:18 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:45.021 17:22:18 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:45.021 00:05:45.021 real 0m19.477s 00:05:45.021 user 0m42.044s 00:05:45.021 sys 0m2.570s 00:05:45.021 17:22:18 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.021 17:22:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:45.021 ************************************ 00:05:45.021 END TEST app_repeat 00:05:45.021 ************************************ 00:05:45.283 17:22:18 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:45.283 17:22:18 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:45.283 17:22:18 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:45.283 17:22:18 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.283 17:22:18 event -- common/autotest_common.sh@10 -- # set +x 00:05:45.283 ************************************ 00:05:45.283 START TEST cpu_locks 00:05:45.283 ************************************ 00:05:45.283 17:22:18 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:45.283 * Looking for test storage... 
00:05:45.283 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:45.283 17:22:18 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:45.283 17:22:18 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:05:45.283 17:22:18 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:45.554 17:22:18 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:45.554 17:22:18 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:45.554 17:22:18 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:45.554 17:22:18 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:45.554 17:22:18 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:45.554 17:22:18 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:45.554 17:22:18 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:45.554 17:22:18 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:45.554 17:22:18 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:45.554 17:22:18 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:45.554 17:22:18 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:45.554 17:22:18 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:45.554 17:22:18 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:45.554 17:22:18 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:45.554 17:22:18 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:45.554 17:22:18 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:45.554 17:22:18 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:45.554 17:22:18 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:45.554 17:22:18 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:45.554 17:22:18 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:45.554 17:22:18 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:45.554 17:22:18 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:45.554 17:22:18 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:45.554 17:22:18 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:45.554 17:22:18 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:45.554 17:22:18 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:45.554 17:22:18 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:45.554 17:22:18 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:45.554 17:22:18 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:45.554 17:22:18 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:45.554 17:22:18 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:45.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.554 --rc genhtml_branch_coverage=1 00:05:45.554 --rc genhtml_function_coverage=1 00:05:45.554 --rc genhtml_legend=1 00:05:45.554 --rc geninfo_all_blocks=1 00:05:45.554 --rc geninfo_unexecuted_blocks=1 00:05:45.554 00:05:45.554 ' 00:05:45.554 17:22:18 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:45.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.554 --rc genhtml_branch_coverage=1 00:05:45.554 --rc genhtml_function_coverage=1 00:05:45.554 --rc genhtml_legend=1 00:05:45.554 --rc geninfo_all_blocks=1 00:05:45.554 --rc geninfo_unexecuted_blocks=1 
00:05:45.554 00:05:45.554 ' 00:05:45.554 17:22:18 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:45.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.554 --rc genhtml_branch_coverage=1 00:05:45.554 --rc genhtml_function_coverage=1 00:05:45.554 --rc genhtml_legend=1 00:05:45.554 --rc geninfo_all_blocks=1 00:05:45.554 --rc geninfo_unexecuted_blocks=1 00:05:45.554 00:05:45.554 ' 00:05:45.554 17:22:18 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:45.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.554 --rc genhtml_branch_coverage=1 00:05:45.554 --rc genhtml_function_coverage=1 00:05:45.554 --rc genhtml_legend=1 00:05:45.554 --rc geninfo_all_blocks=1 00:05:45.554 --rc geninfo_unexecuted_blocks=1 00:05:45.554 00:05:45.554 ' 00:05:45.554 17:22:18 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:45.554 17:22:18 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:45.554 17:22:18 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:45.554 17:22:18 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:45.554 17:22:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:45.554 17:22:18 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.554 17:22:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:45.554 ************************************ 00:05:45.554 START TEST default_locks 00:05:45.554 ************************************ 00:05:45.554 17:22:18 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:45.554 17:22:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58863 00:05:45.554 17:22:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:45.554 
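The cpu_locks preamble above gates its lcov options on a dotted-version comparison (`lt 1.15 2` via cmp_versions), splitting each version on `.`, `-` and `:` and comparing numerically field by field up to the longer operand's length. A condensed sketch of that comparison (helper name is mine; the field-splitting and max-length loop mirror the traced scripts/common.sh logic, with purely numeric fields assumed):

```shell
# returns 0 when $1 < $2, comparing dotted versions numerically per field
version_lt() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v len=${#ver1[@]}
    (( ${#ver2[@]} > len )) && len=${#ver2[@]}
    for (( v = 0; v < len; v++ )); do
        # missing trailing fields count as 0, so 1.15 vs 1.15.0 compare equal
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1   # all fields equal: not strictly less-than
}
```

So `version_lt 1.15 2` succeeds and `version_lt 1.2.3 1.10` succeeds too, because fields compare as numbers, not strings.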
17:22:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58863 00:05:45.554 17:22:18 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58863 ']' 00:05:45.554 17:22:18 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.554 17:22:18 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:45.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.554 17:22:18 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.554 17:22:18 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:45.554 17:22:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:45.554 [2024-12-07 17:22:18.815208] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
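The waitforlisten call above polls with a retry budget (`max_retries=100`) until the freshly launched spdk_tgt is up and listening on its UNIX domain socket. The retry skeleton can be sketched with a plain path existence check standing in for the real readiness probe (the actual helper talks to the socket via the rpc client; only the loop shape and log line are taken from the trace):

```shell
waitforlisten() {
    local rpc_addr=$1
    local max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for (( i = 0; i < max_retries; i++ )); do
        # real helper: probe the socket over RPC; here: path existence
        [ -e "$rpc_addr" ] && return 0
        sleep 0.1
    done
    return 1
}

sock=$(mktemp -u)                 # a path only; nothing exists there yet
( sleep 0.3; touch "$sock" ) &    # simulate the target coming up late
waitforlisten "$sock" && echo "listening"
```

Bounding the loop matters: if the target crashes on startup, the test fails after ~10 s instead of hanging the CI job.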
00:05:45.554 [2024-12-07 17:22:18.815347] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58863 ] 00:05:45.814 [2024-12-07 17:22:18.994654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.814 [2024-12-07 17:22:19.103790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.754 17:22:19 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:46.754 17:22:19 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:46.754 17:22:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58863 00:05:46.754 17:22:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58863 00:05:46.754 17:22:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:47.324 17:22:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58863 00:05:47.324 17:22:20 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58863 ']' 00:05:47.324 17:22:20 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58863 00:05:47.324 17:22:20 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:47.324 17:22:20 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:47.324 17:22:20 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58863 00:05:47.324 17:22:20 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:47.324 17:22:20 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:47.324 killing process with pid 58863 00:05:47.324 17:22:20 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58863' 00:05:47.324 17:22:20 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58863 00:05:47.324 17:22:20 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58863 00:05:49.863 17:22:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58863 00:05:49.863 17:22:22 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:49.863 17:22:22 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58863 00:05:49.863 17:22:22 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:49.863 17:22:22 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:49.863 17:22:22 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:49.863 17:22:22 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:49.863 17:22:22 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58863 00:05:49.863 17:22:22 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58863 ']' 00:05:49.863 17:22:22 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.863 17:22:22 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:49.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.863 17:22:22 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
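The `NOT waitforlisten 58863` step that follows asserts that re-attaching to the already-killed pid fails: NOT runs a command and inverts its exit status. A minimal sketch of that inversion (the traced helper additionally validates its argument with valid_exec_arg and inspects the error code, both omitted here):

```shell
# succeed iff the wrapped command fails
NOT() {
    if "$@"; then
        return 1
    fi
    return 0
}

NOT false && echo "false failed, as asserted"
```

This is the shell-test equivalent of an "expect exception" assertion: the negative path is exercised explicitly rather than left to chance.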
00:05:49.863 17:22:22 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:49.863 17:22:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:49.863 ERROR: process (pid: 58863) is no longer running 00:05:49.863 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58863) - No such process 00:05:49.863 17:22:22 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.863 17:22:22 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:49.863 17:22:22 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:49.863 17:22:22 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:49.863 17:22:22 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:49.863 17:22:22 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:49.863 17:22:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:49.863 17:22:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:49.863 17:22:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:49.863 17:22:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:49.863 00:05:49.863 real 0m4.129s 00:05:49.863 user 0m4.049s 00:05:49.863 sys 0m0.692s 00:05:49.863 ************************************ 00:05:49.863 END TEST default_locks 00:05:49.863 ************************************ 00:05:49.863 17:22:22 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.863 17:22:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:49.863 17:22:22 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:49.863 17:22:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 00:05:49.863 17:22:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.863 17:22:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:49.863 ************************************ 00:05:49.863 START TEST default_locks_via_rpc 00:05:49.863 ************************************ 00:05:49.863 17:22:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:49.863 17:22:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58940 00:05:49.864 17:22:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:49.864 17:22:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58940 00:05:49.864 17:22:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58940 ']' 00:05:49.864 17:22:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.864 17:22:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:49.864 17:22:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.864 17:22:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:49.864 17:22:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.864 [2024-12-07 17:22:23.006140] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:05:49.864 [2024-12-07 17:22:23.006256] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58940 ] 00:05:49.864 [2024-12-07 17:22:23.180105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.124 [2024-12-07 17:22:23.296861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.064 17:22:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:51.064 17:22:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:51.064 17:22:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:51.064 17:22:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.064 17:22:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.064 17:22:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.064 17:22:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:51.064 17:22:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:51.064 17:22:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:51.064 17:22:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:51.064 17:22:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:51.064 17:22:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.064 17:22:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.064 17:22:24 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.064 17:22:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58940 00:05:51.064 17:22:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58940 00:05:51.064 17:22:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:51.324 17:22:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58940 00:05:51.324 17:22:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58940 ']' 00:05:51.324 17:22:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58940 00:05:51.324 17:22:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:51.324 17:22:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:51.324 17:22:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58940 00:05:51.324 17:22:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:51.324 17:22:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:51.324 17:22:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58940' 00:05:51.324 killing process with pid 58940 00:05:51.324 17:22:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58940 00:05:51.324 17:22:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58940 00:05:53.860 00:05:53.860 real 0m3.987s 00:05:53.860 user 0m3.937s 00:05:53.860 sys 0m0.596s 00:05:53.860 17:22:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.860 17:22:26 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.860 ************************************ 00:05:53.860 END TEST default_locks_via_rpc 00:05:53.860 ************************************ 00:05:53.860 17:22:26 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:53.860 17:22:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.860 17:22:26 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.860 17:22:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:53.860 ************************************ 00:05:53.860 START TEST non_locking_app_on_locked_coremask 00:05:53.860 ************************************ 00:05:53.860 17:22:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:53.860 17:22:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59009 00:05:53.860 17:22:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:53.860 17:22:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59009 /var/tmp/spdk.sock 00:05:53.860 17:22:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59009 ']' 00:05:53.860 17:22:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.860 17:22:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.860 17:22:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:53.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.860 17:22:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.860 17:22:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:53.860 [2024-12-07 17:22:27.060526] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:05:53.860 [2024-12-07 17:22:27.060721] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59009 ] 00:05:53.860 [2024-12-07 17:22:27.233997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.155 [2024-12-07 17:22:27.353527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.115 17:22:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:55.115 17:22:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:55.115 17:22:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:55.115 17:22:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59030 00:05:55.115 17:22:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59030 /var/tmp/spdk2.sock 00:05:55.115 17:22:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59030 ']' 00:05:55.115 17:22:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:55.115 17:22:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:55.115 17:22:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:55.115 17:22:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.115 17:22:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.115 [2024-12-07 17:22:28.305177] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:05:55.115 [2024-12-07 17:22:28.305367] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59030 ] 00:05:55.115 [2024-12-07 17:22:28.474844] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:55.115 [2024-12-07 17:22:28.474899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.375 [2024-12-07 17:22:28.705757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.916 17:22:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.916 17:22:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:57.916 17:22:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59009 00:05:57.916 17:22:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59009 00:05:57.916 17:22:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:57.916 17:22:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59009 00:05:57.916 17:22:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59009 ']' 00:05:57.916 17:22:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59009 00:05:57.916 17:22:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:57.916 17:22:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:57.916 17:22:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59009 00:05:57.916 killing process with pid 59009 00:05:57.916 17:22:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:57.916 17:22:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:57.916 17:22:31 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 59009' 00:05:57.916 17:22:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59009 00:05:57.916 17:22:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59009 00:06:03.196 17:22:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59030 00:06:03.196 17:22:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59030 ']' 00:06:03.196 17:22:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59030 00:06:03.196 17:22:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:03.196 17:22:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:03.196 17:22:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59030 00:06:03.196 killing process with pid 59030 00:06:03.196 17:22:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:03.197 17:22:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:03.197 17:22:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59030' 00:06:03.197 17:22:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59030 00:06:03.197 17:22:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59030 00:06:05.105 00:06:05.105 real 0m11.400s 00:06:05.105 user 0m11.638s 00:06:05.105 sys 0m1.155s 00:06:05.105 17:22:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:06:05.105 17:22:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:05.105 ************************************ 00:06:05.105 END TEST non_locking_app_on_locked_coremask 00:06:05.105 ************************************ 00:06:05.105 17:22:38 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:05.105 17:22:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:05.105 17:22:38 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.105 17:22:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:05.105 ************************************ 00:06:05.105 START TEST locking_app_on_unlocked_coremask 00:06:05.105 ************************************ 00:06:05.105 17:22:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:05.105 17:22:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59174 00:06:05.105 17:22:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:05.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:05.105 17:22:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59174 /var/tmp/spdk.sock 00:06:05.105 17:22:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59174 ']' 00:06:05.105 17:22:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.105 17:22:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.105 17:22:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.105 17:22:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.105 17:22:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:05.365 [2024-12-07 17:22:38.539823] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:06:05.365 [2024-12-07 17:22:38.540541] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59174 ] 00:06:05.365 [2024-12-07 17:22:38.721001] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:05.365 [2024-12-07 17:22:38.721063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.624 [2024-12-07 17:22:38.835602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.562 17:22:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.562 17:22:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:06.562 17:22:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59195 00:06:06.562 17:22:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:06.562 17:22:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59195 /var/tmp/spdk2.sock 00:06:06.562 17:22:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59195 ']' 00:06:06.562 17:22:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:06.562 17:22:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.562 17:22:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:06.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:06.562 17:22:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.562 17:22:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.562 [2024-12-07 17:22:39.789240] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:06:06.562 [2024-12-07 17:22:39.789469] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59195 ] 00:06:06.821 [2024-12-07 17:22:39.967824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.821 [2024-12-07 17:22:40.198276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.358 17:22:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.358 17:22:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:09.358 17:22:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59195 00:06:09.358 17:22:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59195 00:06:09.358 17:22:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:09.617 17:22:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59174 00:06:09.617 17:22:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59174 ']' 00:06:09.617 17:22:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59174 00:06:09.617 17:22:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:09.617 17:22:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:09.617 17:22:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59174 00:06:09.617 17:22:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:06:09.617 killing process with pid 59174 00:06:09.617 17:22:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:09.617 17:22:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59174' 00:06:09.617 17:22:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59174 00:06:09.617 17:22:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59174 00:06:14.917 17:22:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59195 00:06:14.917 17:22:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59195 ']' 00:06:14.917 17:22:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59195 00:06:14.917 17:22:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:14.917 17:22:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:14.917 17:22:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59195 00:06:14.917 killing process with pid 59195 00:06:14.917 17:22:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:14.917 17:22:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:14.917 17:22:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59195' 00:06:14.917 17:22:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59195 00:06:14.917 17:22:47 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@978 -- # wait 59195 00:06:16.823 00:06:16.823 real 0m11.593s 00:06:16.824 user 0m11.828s 00:06:16.824 sys 0m1.227s 00:06:16.824 17:22:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.824 17:22:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.824 ************************************ 00:06:16.824 END TEST locking_app_on_unlocked_coremask 00:06:16.824 ************************************ 00:06:16.824 17:22:50 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:16.824 17:22:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.824 17:22:50 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.824 17:22:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.824 ************************************ 00:06:16.824 START TEST locking_app_on_locked_coremask 00:06:16.824 ************************************ 00:06:16.824 17:22:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:16.824 17:22:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59346 00:06:16.824 17:22:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:16.824 17:22:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59346 /var/tmp/spdk.sock 00:06:16.824 17:22:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59346 ']' 00:06:16.824 17:22:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.824 17:22:50 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.824 17:22:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.824 17:22:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.824 17:22:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:17.084 [2024-12-07 17:22:50.205248] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:06:17.084 [2024-12-07 17:22:50.205527] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59346 ] 00:06:17.084 [2024-12-07 17:22:50.382640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.344 [2024-12-07 17:22:50.522771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.283 17:22:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.283 17:22:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:18.283 17:22:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59362 00:06:18.283 17:22:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:18.283 17:22:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59362 /var/tmp/spdk2.sock 00:06:18.283 17:22:51 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@652 -- # local es=0 00:06:18.283 17:22:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59362 /var/tmp/spdk2.sock 00:06:18.283 17:22:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:18.283 17:22:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:18.283 17:22:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:18.283 17:22:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:18.283 17:22:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59362 /var/tmp/spdk2.sock 00:06:18.283 17:22:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59362 ']' 00:06:18.283 17:22:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:18.283 17:22:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.283 17:22:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:18.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:18.283 17:22:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.283 17:22:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.283 [2024-12-07 17:22:51.643179] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:06:18.283 [2024-12-07 17:22:51.643407] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59362 ] 00:06:18.542 [2024-12-07 17:22:51.816829] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59346 has claimed it. 00:06:18.542 [2024-12-07 17:22:51.816913] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:19.111 ERROR: process (pid: 59362) is no longer running 00:06:19.111 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59362) - No such process 00:06:19.111 17:22:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:19.111 17:22:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:19.111 17:22:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:19.111 17:22:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:19.111 17:22:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:19.111 17:22:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:19.111 17:22:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59346 00:06:19.111 17:22:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59346 00:06:19.111 17:22:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:19.371 17:22:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59346 00:06:19.371 17:22:52 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59346 ']' 00:06:19.371 17:22:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59346 00:06:19.371 17:22:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:19.371 17:22:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:19.371 17:22:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59346 00:06:19.371 killing process with pid 59346 00:06:19.371 17:22:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:19.371 17:22:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:19.371 17:22:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59346' 00:06:19.371 17:22:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59346 00:06:19.371 17:22:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59346 00:06:21.908 00:06:21.908 real 0m5.090s 00:06:21.908 user 0m5.056s 00:06:21.908 sys 0m0.925s 00:06:21.908 17:22:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.908 17:22:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.908 ************************************ 00:06:21.908 END TEST locking_app_on_locked_coremask 00:06:21.908 ************************************ 00:06:21.908 17:22:55 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:21.908 17:22:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 
00:06:21.908 17:22:55 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.908 17:22:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.908 ************************************ 00:06:21.908 START TEST locking_overlapped_coremask 00:06:21.908 ************************************ 00:06:21.908 17:22:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:21.908 17:22:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59437 00:06:21.908 17:22:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:21.908 17:22:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59437 /var/tmp/spdk.sock 00:06:21.908 17:22:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59437 ']' 00:06:21.908 17:22:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.908 17:22:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:21.908 17:22:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.908 17:22:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:21.908 17:22:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:22.166 [2024-12-07 17:22:55.358067] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:06:22.166 [2024-12-07 17:22:55.358266] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59437 ] 00:06:22.166 [2024-12-07 17:22:55.534497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:22.426 [2024-12-07 17:22:55.676199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.426 [2024-12-07 17:22:55.676372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.426 [2024-12-07 17:22:55.676417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:23.381 17:22:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.381 17:22:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:23.381 17:22:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:23.381 17:22:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59455 00:06:23.381 17:22:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59455 /var/tmp/spdk2.sock 00:06:23.381 17:22:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:23.381 17:22:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59455 /var/tmp/spdk2.sock 00:06:23.381 17:22:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:23.381 17:22:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:23.381 17:22:56 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:23.381 17:22:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:23.381 17:22:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59455 /var/tmp/spdk2.sock 00:06:23.381 17:22:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59455 ']' 00:06:23.381 17:22:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:23.381 17:22:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:23.381 17:22:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:23.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:23.381 17:22:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:23.381 17:22:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.641 [2024-12-07 17:22:56.797405] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:06:23.641 [2024-12-07 17:22:56.797592] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59455 ] 00:06:23.641 [2024-12-07 17:22:56.970578] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59437 has claimed it. 00:06:23.641 [2024-12-07 17:22:56.970668] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:24.210 ERROR: process (pid: 59455) is no longer running 00:06:24.210 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59455) - No such process 00:06:24.210 17:22:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:24.210 17:22:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:24.210 17:22:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:24.210 17:22:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:24.210 17:22:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:24.210 17:22:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:24.210 17:22:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:24.210 17:22:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:24.210 17:22:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:24.210 17:22:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:24.210 17:22:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59437 00:06:24.210 17:22:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59437 ']' 00:06:24.210 17:22:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59437 00:06:24.210 17:22:57 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:24.210 17:22:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:24.210 17:22:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59437 00:06:24.210 killing process with pid 59437 00:06:24.210 17:22:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:24.210 17:22:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:24.210 17:22:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59437' 00:06:24.210 17:22:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59437 00:06:24.210 17:22:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59437 00:06:27.503 00:06:27.503 real 0m4.939s 00:06:27.503 user 0m13.269s 00:06:27.503 sys 0m0.743s 00:06:27.503 17:23:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.503 ************************************ 00:06:27.503 END TEST locking_overlapped_coremask 00:06:27.503 ************************************ 00:06:27.503 17:23:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:27.503 17:23:00 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:27.503 17:23:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:27.503 17:23:00 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.503 17:23:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:27.503 ************************************ 00:06:27.503 START TEST 
locking_overlapped_coremask_via_rpc 00:06:27.503 ************************************ 00:06:27.503 17:23:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:27.503 17:23:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59524 00:06:27.503 17:23:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59524 /var/tmp/spdk.sock 00:06:27.503 17:23:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:27.503 17:23:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59524 ']' 00:06:27.503 17:23:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.503 17:23:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:27.503 17:23:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.503 17:23:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:27.503 17:23:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.503 [2024-12-07 17:23:00.367352] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:06:27.503 [2024-12-07 17:23:00.367599] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59524 ] 00:06:27.503 [2024-12-07 17:23:00.549838] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:27.503 [2024-12-07 17:23:00.549910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:27.503 [2024-12-07 17:23:00.691968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.503 [2024-12-07 17:23:00.692074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.503 [2024-12-07 17:23:00.692101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:28.437 17:23:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:28.438 17:23:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:28.438 17:23:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59548 00:06:28.438 17:23:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:28.438 17:23:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59548 /var/tmp/spdk2.sock 00:06:28.438 17:23:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59548 ']' 00:06:28.438 17:23:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:28.438 17:23:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:28.438 17:23:01 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:28.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:28.438 17:23:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:28.438 17:23:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.696 [2024-12-07 17:23:01.835362] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:06:28.696 [2024-12-07 17:23:01.836149] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59548 ] 00:06:28.696 [2024-12-07 17:23:02.015724] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:28.696 [2024-12-07 17:23:02.015792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:28.954 [2024-12-07 17:23:02.305999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:28.954 [2024-12-07 17:23:02.309032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:28.954 [2024-12-07 17:23:02.309050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:31.489 17:23:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:31.489 17:23:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:31.489 17:23:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:31.489 17:23:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.489 17:23:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.489 17:23:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.489 17:23:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:31.489 17:23:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:31.489 17:23:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:31.489 17:23:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:31.489 17:23:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:31.489 17:23:04 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:31.489 17:23:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:31.489 17:23:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:31.489 17:23:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.489 17:23:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.489 [2024-12-07 17:23:04.464100] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59524 has claimed it. 00:06:31.489 request: 00:06:31.489 { 00:06:31.489 "method": "framework_enable_cpumask_locks", 00:06:31.489 "req_id": 1 00:06:31.489 } 00:06:31.489 Got JSON-RPC error response 00:06:31.489 response: 00:06:31.489 { 00:06:31.489 "code": -32603, 00:06:31.490 "message": "Failed to claim CPU core: 2" 00:06:31.490 } 00:06:31.490 17:23:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:31.490 17:23:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:31.490 17:23:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:31.490 17:23:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:31.490 17:23:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:31.490 17:23:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59524 /var/tmp/spdk.sock 00:06:31.490 17:23:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # 
'[' -z 59524 ']' 00:06:31.490 17:23:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.490 17:23:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:31.490 17:23:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.490 17:23:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:31.490 17:23:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.490 17:23:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:31.490 17:23:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:31.490 17:23:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59548 /var/tmp/spdk2.sock 00:06:31.490 17:23:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59548 ']' 00:06:31.490 17:23:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:31.490 17:23:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:31.490 17:23:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:31.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:31.490 17:23:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:31.490 17:23:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.749 17:23:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:31.749 17:23:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:31.749 17:23:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:31.749 17:23:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:31.749 17:23:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:31.749 17:23:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:31.749 00:06:31.749 real 0m4.644s 00:06:31.749 user 0m1.258s 00:06:31.749 sys 0m0.232s 00:06:31.749 17:23:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.749 17:23:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.749 ************************************ 00:06:31.749 END TEST locking_overlapped_coremask_via_rpc 00:06:31.749 ************************************ 00:06:31.749 17:23:04 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:31.749 17:23:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59524 ]] 00:06:31.749 17:23:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59524 00:06:31.749 17:23:04 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59524 ']' 00:06:31.749 17:23:04 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59524 00:06:31.749 17:23:04 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:31.749 17:23:04 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:31.749 17:23:04 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59524 00:06:31.749 17:23:04 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:31.749 17:23:04 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:31.749 killing process with pid 59524 00:06:31.749 17:23:04 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59524' 00:06:31.749 17:23:04 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59524 00:06:31.749 17:23:04 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59524 00:06:35.080 17:23:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59548 ]] 00:06:35.080 17:23:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59548 00:06:35.080 17:23:07 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59548 ']' 00:06:35.080 17:23:07 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59548 00:06:35.080 17:23:07 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:35.080 17:23:07 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:35.080 17:23:07 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59548 00:06:35.080 killing process with pid 59548 00:06:35.080 17:23:07 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:35.080 17:23:07 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:35.080 17:23:07 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 59548' 00:06:35.080 17:23:07 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59548 00:06:35.080 17:23:07 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59548 00:06:37.621 17:23:10 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:37.621 17:23:10 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:37.621 17:23:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59524 ]] 00:06:37.621 17:23:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59524 00:06:37.621 17:23:10 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59524 ']' 00:06:37.621 17:23:10 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59524 00:06:37.621 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59524) - No such process 00:06:37.621 Process with pid 59524 is not found 00:06:37.621 17:23:10 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59524 is not found' 00:06:37.621 17:23:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59548 ]] 00:06:37.621 17:23:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59548 00:06:37.621 17:23:10 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59548 ']' 00:06:37.621 Process with pid 59548 is not found 00:06:37.621 17:23:10 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59548 00:06:37.621 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59548) - No such process 00:06:37.621 17:23:10 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59548 is not found' 00:06:37.621 17:23:10 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:37.621 00:06:37.621 real 0m51.978s 00:06:37.621 user 1m29.658s 00:06:37.621 sys 0m7.230s 00:06:37.621 17:23:10 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:37.621 17:23:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.621 
************************************ 00:06:37.621 END TEST cpu_locks 00:06:37.621 ************************************ 00:06:37.621 00:06:37.621 real 1m24.145s 00:06:37.621 user 2m33.890s 00:06:37.621 sys 0m11.110s 00:06:37.621 ************************************ 00:06:37.621 END TEST event 00:06:37.621 ************************************ 00:06:37.621 17:23:10 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:37.621 17:23:10 event -- common/autotest_common.sh@10 -- # set +x 00:06:37.621 17:23:10 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:37.621 17:23:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:37.621 17:23:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.621 17:23:10 -- common/autotest_common.sh@10 -- # set +x 00:06:37.621 ************************************ 00:06:37.621 START TEST thread 00:06:37.621 ************************************ 00:06:37.621 17:23:10 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:37.621 * Looking for test storage... 
00:06:37.621 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:37.621 17:23:10 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:37.621 17:23:10 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:06:37.621 17:23:10 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:37.621 17:23:10 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:37.621 17:23:10 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:37.621 17:23:10 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:37.621 17:23:10 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:37.621 17:23:10 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:37.621 17:23:10 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:37.621 17:23:10 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:37.621 17:23:10 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:37.621 17:23:10 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:37.621 17:23:10 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:37.621 17:23:10 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:37.621 17:23:10 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:37.621 17:23:10 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:37.621 17:23:10 thread -- scripts/common.sh@345 -- # : 1 00:06:37.621 17:23:10 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:37.621 17:23:10 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:37.621 17:23:10 thread -- scripts/common.sh@365 -- # decimal 1 00:06:37.621 17:23:10 thread -- scripts/common.sh@353 -- # local d=1 00:06:37.621 17:23:10 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:37.621 17:23:10 thread -- scripts/common.sh@355 -- # echo 1 00:06:37.621 17:23:10 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:37.621 17:23:10 thread -- scripts/common.sh@366 -- # decimal 2 00:06:37.621 17:23:10 thread -- scripts/common.sh@353 -- # local d=2 00:06:37.621 17:23:10 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:37.621 17:23:10 thread -- scripts/common.sh@355 -- # echo 2 00:06:37.621 17:23:10 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:37.621 17:23:10 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:37.621 17:23:10 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:37.621 17:23:10 thread -- scripts/common.sh@368 -- # return 0 00:06:37.621 17:23:10 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:37.621 17:23:10 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:37.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.621 --rc genhtml_branch_coverage=1 00:06:37.621 --rc genhtml_function_coverage=1 00:06:37.621 --rc genhtml_legend=1 00:06:37.621 --rc geninfo_all_blocks=1 00:06:37.621 --rc geninfo_unexecuted_blocks=1 00:06:37.621 00:06:37.621 ' 00:06:37.621 17:23:10 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:37.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.621 --rc genhtml_branch_coverage=1 00:06:37.621 --rc genhtml_function_coverage=1 00:06:37.621 --rc genhtml_legend=1 00:06:37.621 --rc geninfo_all_blocks=1 00:06:37.621 --rc geninfo_unexecuted_blocks=1 00:06:37.621 00:06:37.621 ' 00:06:37.621 17:23:10 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:37.621 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.621 --rc genhtml_branch_coverage=1 00:06:37.621 --rc genhtml_function_coverage=1 00:06:37.621 --rc genhtml_legend=1 00:06:37.621 --rc geninfo_all_blocks=1 00:06:37.621 --rc geninfo_unexecuted_blocks=1 00:06:37.621 00:06:37.621 ' 00:06:37.621 17:23:10 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:37.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.621 --rc genhtml_branch_coverage=1 00:06:37.621 --rc genhtml_function_coverage=1 00:06:37.621 --rc genhtml_legend=1 00:06:37.621 --rc geninfo_all_blocks=1 00:06:37.621 --rc geninfo_unexecuted_blocks=1 00:06:37.621 00:06:37.621 ' 00:06:37.621 17:23:10 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:37.621 17:23:10 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:37.621 17:23:10 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.621 17:23:10 thread -- common/autotest_common.sh@10 -- # set +x 00:06:37.621 ************************************ 00:06:37.621 START TEST thread_poller_perf 00:06:37.621 ************************************ 00:06:37.621 17:23:10 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:37.621 [2024-12-07 17:23:10.860467] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:06:37.621 [2024-12-07 17:23:10.860579] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59754 ] 00:06:37.881 [2024-12-07 17:23:11.039178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.881 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:37.881 [2024-12-07 17:23:11.174949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.263 [2024-12-07T17:23:12.645Z] ====================================== 00:06:39.263 [2024-12-07T17:23:12.645Z] busy:2298887446 (cyc) 00:06:39.263 [2024-12-07T17:23:12.645Z] total_run_count: 405000 00:06:39.263 [2024-12-07T17:23:12.645Z] tsc_hz: 2290000000 (cyc) 00:06:39.263 [2024-12-07T17:23:12.645Z] ====================================== 00:06:39.263 [2024-12-07T17:23:12.645Z] poller_cost: 5676 (cyc), 2478 (nsec) 00:06:39.263 00:06:39.263 real 0m1.613s 00:06:39.263 user 0m1.392s 00:06:39.263 sys 0m0.113s 00:06:39.263 17:23:12 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:39.263 17:23:12 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:39.263 ************************************ 00:06:39.263 END TEST thread_poller_perf 00:06:39.263 ************************************ 00:06:39.263 17:23:12 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:39.263 17:23:12 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:39.263 17:23:12 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:39.263 17:23:12 thread -- common/autotest_common.sh@10 -- # set +x 00:06:39.263 ************************************ 00:06:39.263 START TEST thread_poller_perf 00:06:39.263 
************************************ 00:06:39.263 17:23:12 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:39.263 [2024-12-07 17:23:12.548080] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:06:39.263 [2024-12-07 17:23:12.548206] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59785 ] 00:06:39.523 [2024-12-07 17:23:12.726719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.523 [2024-12-07 17:23:12.870204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.523 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:40.905 [2024-12-07T17:23:14.287Z] ====================================== 00:06:40.905 [2024-12-07T17:23:14.287Z] busy:2294046552 (cyc) 00:06:40.905 [2024-12-07T17:23:14.287Z] total_run_count: 4906000 00:06:40.905 [2024-12-07T17:23:14.287Z] tsc_hz: 2290000000 (cyc) 00:06:40.905 [2024-12-07T17:23:14.287Z] ====================================== 00:06:40.905 [2024-12-07T17:23:14.287Z] poller_cost: 467 (cyc), 203 (nsec) 00:06:40.905 00:06:40.905 real 0m1.625s 00:06:40.905 user 0m1.389s 00:06:40.905 sys 0m0.127s 00:06:40.905 ************************************ 00:06:40.905 END TEST thread_poller_perf 00:06:40.905 ************************************ 00:06:40.905 17:23:14 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.905 17:23:14 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:40.905 17:23:14 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:40.905 00:06:40.905 real 0m3.605s 00:06:40.905 user 0m2.945s 00:06:40.905 sys 0m0.458s 00:06:40.905 17:23:14 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.905 17:23:14 thread -- common/autotest_common.sh@10 -- # set +x 00:06:40.905 ************************************ 00:06:40.905 END TEST thread 00:06:40.905 ************************************ 00:06:40.905 17:23:14 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:40.905 17:23:14 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:40.905 17:23:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:40.905 17:23:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.905 17:23:14 -- common/autotest_common.sh@10 -- # set +x 00:06:40.905 ************************************ 00:06:40.905 START TEST app_cmdline 00:06:40.905 ************************************ 00:06:40.905 17:23:14 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:41.166 * Looking for test storage... 00:06:41.166 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:41.166 17:23:14 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:41.166 17:23:14 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:06:41.166 17:23:14 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:41.166 17:23:14 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:41.166 17:23:14 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:41.166 17:23:14 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:41.166 17:23:14 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:41.166 17:23:14 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:41.166 17:23:14 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:41.166 17:23:14 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:41.166 17:23:14 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:41.166 17:23:14 app_cmdline -- scripts/common.sh@338 -- # 
local 'op=<' 00:06:41.166 17:23:14 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:41.166 17:23:14 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:41.166 17:23:14 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:41.166 17:23:14 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:41.166 17:23:14 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:41.166 17:23:14 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:41.166 17:23:14 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:41.166 17:23:14 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:41.166 17:23:14 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:41.166 17:23:14 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:41.166 17:23:14 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:41.166 17:23:14 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:41.166 17:23:14 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:41.166 17:23:14 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:41.166 17:23:14 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:41.166 17:23:14 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:41.166 17:23:14 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:41.166 17:23:14 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:41.166 17:23:14 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:41.166 17:23:14 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:41.166 17:23:14 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:41.166 17:23:14 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:41.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.166 --rc genhtml_branch_coverage=1 00:06:41.166 --rc genhtml_function_coverage=1 00:06:41.166 --rc 
genhtml_legend=1 00:06:41.166 --rc geninfo_all_blocks=1 00:06:41.166 --rc geninfo_unexecuted_blocks=1 00:06:41.166 00:06:41.166 ' 00:06:41.166 17:23:14 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:41.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.166 --rc genhtml_branch_coverage=1 00:06:41.166 --rc genhtml_function_coverage=1 00:06:41.166 --rc genhtml_legend=1 00:06:41.166 --rc geninfo_all_blocks=1 00:06:41.166 --rc geninfo_unexecuted_blocks=1 00:06:41.166 00:06:41.166 ' 00:06:41.166 17:23:14 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:41.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.166 --rc genhtml_branch_coverage=1 00:06:41.166 --rc genhtml_function_coverage=1 00:06:41.166 --rc genhtml_legend=1 00:06:41.166 --rc geninfo_all_blocks=1 00:06:41.166 --rc geninfo_unexecuted_blocks=1 00:06:41.166 00:06:41.166 ' 00:06:41.166 17:23:14 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:41.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.166 --rc genhtml_branch_coverage=1 00:06:41.166 --rc genhtml_function_coverage=1 00:06:41.166 --rc genhtml_legend=1 00:06:41.166 --rc geninfo_all_blocks=1 00:06:41.166 --rc geninfo_unexecuted_blocks=1 00:06:41.166 00:06:41.166 ' 00:06:41.166 17:23:14 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:41.166 17:23:14 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59874 00:06:41.166 17:23:14 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:41.166 17:23:14 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59874 00:06:41.166 17:23:14 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59874 ']' 00:06:41.166 17:23:14 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.166 17:23:14 app_cmdline -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:06:41.166 17:23:14 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.166 17:23:14 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:41.166 17:23:14 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:41.427 [2024-12-07 17:23:14.574538] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:06:41.427 [2024-12-07 17:23:14.574743] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59874 ] 00:06:41.427 [2024-12-07 17:23:14.747424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.686 [2024-12-07 17:23:14.861587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.630 17:23:15 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.630 17:23:15 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:42.630 17:23:15 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:42.630 { 00:06:42.630 "version": "SPDK v25.01-pre git sha1 a2f5e1c2d", 00:06:42.630 "fields": { 00:06:42.630 "major": 25, 00:06:42.630 "minor": 1, 00:06:42.630 "patch": 0, 00:06:42.630 "suffix": "-pre", 00:06:42.630 "commit": "a2f5e1c2d" 00:06:42.630 } 00:06:42.630 } 00:06:42.630 17:23:15 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:42.630 17:23:15 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:42.630 17:23:15 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:42.630 17:23:15 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:42.630 17:23:15 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:42.630 17:23:15 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:42.630 17:23:15 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:42.630 17:23:15 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.630 17:23:15 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:42.630 17:23:15 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.630 17:23:15 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:42.631 17:23:15 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:42.631 17:23:15 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:42.631 17:23:15 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:42.631 17:23:15 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:42.631 17:23:15 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:42.631 17:23:15 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:42.631 17:23:15 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:42.631 17:23:15 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:42.631 17:23:15 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:42.631 17:23:15 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:42.631 17:23:15 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:42.631 17:23:16 app_cmdline -- common/autotest_common.sh@646 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:42.631 17:23:16 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:42.890 request: 00:06:42.890 { 00:06:42.890 "method": "env_dpdk_get_mem_stats", 00:06:42.890 "req_id": 1 00:06:42.890 } 00:06:42.890 Got JSON-RPC error response 00:06:42.890 response: 00:06:42.890 { 00:06:42.890 "code": -32601, 00:06:42.890 "message": "Method not found" 00:06:42.890 } 00:06:42.890 17:23:16 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:42.890 17:23:16 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:42.890 17:23:16 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:42.890 17:23:16 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:42.890 17:23:16 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59874 00:06:42.890 17:23:16 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59874 ']' 00:06:42.890 17:23:16 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59874 00:06:42.890 17:23:16 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:42.890 17:23:16 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:42.890 17:23:16 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59874 00:06:42.890 17:23:16 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:42.890 17:23:16 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:42.890 17:23:16 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59874' 00:06:42.890 killing process with pid 59874 00:06:42.890 17:23:16 app_cmdline -- common/autotest_common.sh@973 -- # kill 59874 00:06:42.890 17:23:16 app_cmdline -- common/autotest_common.sh@978 -- # wait 59874 00:06:45.429 00:06:45.429 real 0m4.338s 00:06:45.429 user 0m4.531s 00:06:45.429 sys 0m0.608s 00:06:45.429 17:23:18 app_cmdline -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.429 17:23:18 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:45.429 ************************************ 00:06:45.429 END TEST app_cmdline 00:06:45.429 ************************************ 00:06:45.429 17:23:18 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:45.429 17:23:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:45.429 17:23:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.429 17:23:18 -- common/autotest_common.sh@10 -- # set +x 00:06:45.429 ************************************ 00:06:45.429 START TEST version 00:06:45.429 ************************************ 00:06:45.429 17:23:18 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:45.429 * Looking for test storage... 00:06:45.429 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:45.429 17:23:18 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:45.429 17:23:18 version -- common/autotest_common.sh@1711 -- # lcov --version 00:06:45.429 17:23:18 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:45.688 17:23:18 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:45.688 17:23:18 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:45.688 17:23:18 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:45.688 17:23:18 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:45.688 17:23:18 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:45.688 17:23:18 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:45.688 17:23:18 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:45.688 17:23:18 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:45.688 17:23:18 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:45.688 17:23:18 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:45.688 17:23:18 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:06:45.688 17:23:18 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:45.688 17:23:18 version -- scripts/common.sh@344 -- # case "$op" in 00:06:45.688 17:23:18 version -- scripts/common.sh@345 -- # : 1 00:06:45.688 17:23:18 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:45.688 17:23:18 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:45.688 17:23:18 version -- scripts/common.sh@365 -- # decimal 1 00:06:45.689 17:23:18 version -- scripts/common.sh@353 -- # local d=1 00:06:45.689 17:23:18 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:45.689 17:23:18 version -- scripts/common.sh@355 -- # echo 1 00:06:45.689 17:23:18 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:45.689 17:23:18 version -- scripts/common.sh@366 -- # decimal 2 00:06:45.689 17:23:18 version -- scripts/common.sh@353 -- # local d=2 00:06:45.689 17:23:18 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:45.689 17:23:18 version -- scripts/common.sh@355 -- # echo 2 00:06:45.689 17:23:18 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:45.689 17:23:18 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:45.689 17:23:18 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:45.689 17:23:18 version -- scripts/common.sh@368 -- # return 0 00:06:45.689 17:23:18 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:45.689 17:23:18 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:45.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.689 --rc genhtml_branch_coverage=1 00:06:45.689 --rc genhtml_function_coverage=1 00:06:45.689 --rc genhtml_legend=1 00:06:45.689 --rc geninfo_all_blocks=1 00:06:45.689 --rc geninfo_unexecuted_blocks=1 00:06:45.689 00:06:45.689 ' 00:06:45.689 17:23:18 version -- common/autotest_common.sh@1724 -- # 
LCOV_OPTS=' 00:06:45.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.689 --rc genhtml_branch_coverage=1 00:06:45.689 --rc genhtml_function_coverage=1 00:06:45.689 --rc genhtml_legend=1 00:06:45.689 --rc geninfo_all_blocks=1 00:06:45.689 --rc geninfo_unexecuted_blocks=1 00:06:45.689 00:06:45.689 ' 00:06:45.689 17:23:18 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:45.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.689 --rc genhtml_branch_coverage=1 00:06:45.689 --rc genhtml_function_coverage=1 00:06:45.689 --rc genhtml_legend=1 00:06:45.689 --rc geninfo_all_blocks=1 00:06:45.689 --rc geninfo_unexecuted_blocks=1 00:06:45.689 00:06:45.689 ' 00:06:45.689 17:23:18 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:45.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.689 --rc genhtml_branch_coverage=1 00:06:45.689 --rc genhtml_function_coverage=1 00:06:45.689 --rc genhtml_legend=1 00:06:45.689 --rc geninfo_all_blocks=1 00:06:45.689 --rc geninfo_unexecuted_blocks=1 00:06:45.689 00:06:45.689 ' 00:06:45.689 17:23:18 version -- app/version.sh@17 -- # get_header_version major 00:06:45.689 17:23:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:45.689 17:23:18 version -- app/version.sh@14 -- # cut -f2 00:06:45.689 17:23:18 version -- app/version.sh@14 -- # tr -d '"' 00:06:45.689 17:23:18 version -- app/version.sh@17 -- # major=25 00:06:45.689 17:23:18 version -- app/version.sh@18 -- # get_header_version minor 00:06:45.689 17:23:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:45.689 17:23:18 version -- app/version.sh@14 -- # cut -f2 00:06:45.689 17:23:18 version -- app/version.sh@14 -- # tr -d '"' 00:06:45.689 17:23:18 version -- app/version.sh@18 -- # minor=1 00:06:45.689 17:23:18 
version -- app/version.sh@19 -- # get_header_version patch 00:06:45.689 17:23:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:45.689 17:23:18 version -- app/version.sh@14 -- # cut -f2 00:06:45.689 17:23:18 version -- app/version.sh@14 -- # tr -d '"' 00:06:45.689 17:23:18 version -- app/version.sh@19 -- # patch=0 00:06:45.689 17:23:18 version -- app/version.sh@20 -- # get_header_version suffix 00:06:45.689 17:23:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:45.689 17:23:18 version -- app/version.sh@14 -- # cut -f2 00:06:45.689 17:23:18 version -- app/version.sh@14 -- # tr -d '"' 00:06:45.689 17:23:18 version -- app/version.sh@20 -- # suffix=-pre 00:06:45.689 17:23:18 version -- app/version.sh@22 -- # version=25.1 00:06:45.689 17:23:18 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:45.689 17:23:18 version -- app/version.sh@28 -- # version=25.1rc0 00:06:45.689 17:23:18 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:45.689 17:23:18 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:45.689 17:23:18 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:45.689 17:23:18 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:45.689 ************************************ 00:06:45.689 END TEST version 00:06:45.689 ************************************ 00:06:45.689 00:06:45.689 real 0m0.324s 00:06:45.689 user 0m0.191s 00:06:45.689 sys 0m0.192s 00:06:45.689 17:23:18 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.689 17:23:18 version -- common/autotest_common.sh@10 -- # set +x 00:06:45.689 
17:23:19 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:45.689 17:23:19 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:06:45.689 17:23:19 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:45.689 17:23:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:45.689 17:23:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.689 17:23:19 -- common/autotest_common.sh@10 -- # set +x 00:06:45.689 ************************************ 00:06:45.689 START TEST bdev_raid 00:06:45.689 ************************************ 00:06:45.689 17:23:19 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:45.949 * Looking for test storage... 00:06:45.949 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:45.949 17:23:19 bdev_raid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:45.949 17:23:19 bdev_raid -- common/autotest_common.sh@1711 -- # lcov --version 00:06:45.949 17:23:19 bdev_raid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:45.949 17:23:19 bdev_raid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:45.949 17:23:19 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:45.949 17:23:19 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:45.949 17:23:19 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:45.949 17:23:19 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:06:45.949 17:23:19 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:06:45.949 17:23:19 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:06:45.949 17:23:19 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:06:45.949 17:23:19 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:06:45.949 17:23:19 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:06:45.949 17:23:19 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:06:45.949 17:23:19 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:06:45.949 17:23:19 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:06:45.949 17:23:19 bdev_raid -- scripts/common.sh@345 -- # : 1 00:06:45.949 17:23:19 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:45.949 17:23:19 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:45.949 17:23:19 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:06:45.949 17:23:19 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:06:45.949 17:23:19 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:45.949 17:23:19 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:06:45.949 17:23:19 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:06:45.949 17:23:19 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:06:45.950 17:23:19 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:06:45.950 17:23:19 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:45.950 17:23:19 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:06:45.950 17:23:19 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:06:45.950 17:23:19 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:45.950 17:23:19 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:45.950 17:23:19 bdev_raid -- scripts/common.sh@368 -- # return 0 00:06:45.950 17:23:19 bdev_raid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:45.950 17:23:19 bdev_raid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:45.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.950 --rc genhtml_branch_coverage=1 00:06:45.950 --rc genhtml_function_coverage=1 00:06:45.950 --rc genhtml_legend=1 00:06:45.950 --rc geninfo_all_blocks=1 00:06:45.950 --rc geninfo_unexecuted_blocks=1 00:06:45.950 00:06:45.950 ' 00:06:45.950 17:23:19 bdev_raid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:45.950 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:45.950 --rc genhtml_branch_coverage=1 00:06:45.950 --rc genhtml_function_coverage=1 00:06:45.950 --rc genhtml_legend=1 00:06:45.950 --rc geninfo_all_blocks=1 00:06:45.950 --rc geninfo_unexecuted_blocks=1 00:06:45.950 00:06:45.950 ' 00:06:45.950 17:23:19 bdev_raid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:45.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.950 --rc genhtml_branch_coverage=1 00:06:45.950 --rc genhtml_function_coverage=1 00:06:45.950 --rc genhtml_legend=1 00:06:45.950 --rc geninfo_all_blocks=1 00:06:45.950 --rc geninfo_unexecuted_blocks=1 00:06:45.950 00:06:45.950 ' 00:06:45.950 17:23:19 bdev_raid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:45.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.950 --rc genhtml_branch_coverage=1 00:06:45.950 --rc genhtml_function_coverage=1 00:06:45.950 --rc genhtml_legend=1 00:06:45.950 --rc geninfo_all_blocks=1 00:06:45.950 --rc geninfo_unexecuted_blocks=1 00:06:45.950 00:06:45.950 ' 00:06:45.950 17:23:19 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:45.950 17:23:19 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:06:45.950 17:23:19 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:06:45.950 17:23:19 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:06:45.950 17:23:19 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:06:45.950 17:23:19 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:06:45.950 17:23:19 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:06:45.950 17:23:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:45.950 17:23:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.950 17:23:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:45.950 ************************************ 
00:06:45.950 START TEST raid1_resize_data_offset_test 00:06:45.950 ************************************ 00:06:45.950 17:23:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:06:45.950 17:23:19 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=60066 00:06:45.950 17:23:19 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:45.950 17:23:19 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 60066' 00:06:45.950 Process raid pid: 60066 00:06:45.950 17:23:19 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 60066 00:06:45.950 17:23:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 60066 ']' 00:06:45.950 17:23:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.950 17:23:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:45.950 17:23:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.950 17:23:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:45.950 17:23:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.209 [2024-12-07 17:23:19.393433] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:06:46.209 [2024-12-07 17:23:19.393629] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:46.209 [2024-12-07 17:23:19.572833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.468 [2024-12-07 17:23:19.692675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.726 [2024-12-07 17:23:19.894623] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:46.726 [2024-12-07 17:23:19.894749] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:46.985 17:23:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.985 17:23:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:06:46.985 17:23:20 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:06:46.985 17:23:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.985 17:23:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.985 malloc0 00:06:46.985 17:23:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.985 17:23:20 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:06:46.985 17:23:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.985 17:23:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.245 malloc1 00:06:47.245 17:23:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.245 17:23:20 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:06:47.245 17:23:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.245 17:23:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.245 null0 00:06:47.245 17:23:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.245 17:23:20 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:06:47.245 17:23:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.245 17:23:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.245 [2024-12-07 17:23:20.452076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:06:47.245 [2024-12-07 17:23:20.453880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:47.245 [2024-12-07 17:23:20.453946] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:06:47.245 [2024-12-07 17:23:20.454114] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:47.245 [2024-12-07 17:23:20.454128] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:06:47.245 [2024-12-07 17:23:20.454409] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:47.245 [2024-12-07 17:23:20.454591] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:47.245 [2024-12-07 17:23:20.454605] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:47.245 [2024-12-07 17:23:20.454746] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:06:47.245 17:23:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.245 17:23:20 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:47.245 17:23:20 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:47.245 17:23:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.245 17:23:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.245 17:23:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.245 17:23:20 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:06:47.245 17:23:20 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:06:47.245 17:23:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.245 17:23:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.245 [2024-12-07 17:23:20.515948] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:06:47.245 17:23:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.245 17:23:20 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:06:47.245 17:23:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.245 17:23:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.818 malloc2 00:06:47.818 17:23:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.818 17:23:21 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:06:47.818 17:23:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.818 17:23:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.818 [2024-12-07 17:23:21.061850] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:06:47.818 [2024-12-07 17:23:21.079714] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:47.818 17:23:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.818 [2024-12-07 17:23:21.081590] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:06:47.818 17:23:21 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:47.818 17:23:21 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:47.818 17:23:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.818 17:23:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.818 17:23:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.818 17:23:21 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:06:47.818 17:23:21 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 60066 00:06:47.818 17:23:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 60066 ']' 00:06:47.818 17:23:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 60066 00:06:47.818 17:23:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 00:06:47.818 17:23:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:06:47.818 17:23:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60066 00:06:47.818 killing process with pid 60066 00:06:47.818 17:23:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:47.818 17:23:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:47.818 17:23:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60066' 00:06:47.818 17:23:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 60066 00:06:47.818 [2024-12-07 17:23:21.174413] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:47.818 17:23:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 60066 00:06:47.818 [2024-12-07 17:23:21.174573] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:06:47.818 [2024-12-07 17:23:21.174634] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:47.818 [2024-12-07 17:23:21.174656] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:06:48.078 [2024-12-07 17:23:21.211619] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:48.079 [2024-12-07 17:23:21.212107] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:48.079 [2024-12-07 17:23:21.212135] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:49.986 [2024-12-07 17:23:22.979356] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:50.926 ************************************ 00:06:50.926 END TEST raid1_resize_data_offset_test 00:06:50.926 ************************************ 00:06:50.926 17:23:24 bdev_raid.raid1_resize_data_offset_test -- 
bdev/bdev_raid.sh@943 -- # return 0 00:06:50.926 00:06:50.926 real 0m4.801s 00:06:50.926 user 0m4.728s 00:06:50.926 sys 0m0.564s 00:06:50.926 17:23:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.926 17:23:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.926 17:23:24 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:06:50.926 17:23:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:50.926 17:23:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.926 17:23:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:50.926 ************************************ 00:06:50.926 START TEST raid0_resize_superblock_test 00:06:50.926 ************************************ 00:06:50.926 17:23:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 00:06:50.926 17:23:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:06:50.926 17:23:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60153 00:06:50.926 Process raid pid: 60153 00:06:50.926 17:23:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:50.926 17:23:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60153' 00:06:50.926 17:23:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60153 00:06:50.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:50.926 17:23:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60153 ']' 00:06:50.926 17:23:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.926 17:23:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:50.926 17:23:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.926 17:23:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:50.926 17:23:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.926 [2024-12-07 17:23:24.266709] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:06:50.926 [2024-12-07 17:23:24.266824] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:51.185 [2024-12-07 17:23:24.440049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.185 [2024-12-07 17:23:24.556244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.444 [2024-12-07 17:23:24.769322] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:51.444 [2024-12-07 17:23:24.769370] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:52.011 17:23:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:52.011 17:23:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:06:52.011 17:23:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 
00:06:52.011 17:23:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.011 17:23:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.579 malloc0 00:06:52.579 17:23:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.579 17:23:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:52.579 17:23:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.579 17:23:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.579 [2024-12-07 17:23:25.703152] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:52.579 [2024-12-07 17:23:25.703287] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:52.579 [2024-12-07 17:23:25.703341] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:52.579 [2024-12-07 17:23:25.703393] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:52.579 [2024-12-07 17:23:25.706030] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:52.579 [2024-12-07 17:23:25.706119] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:52.579 pt0 00:06:52.579 17:23:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.579 17:23:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:06:52.579 17:23:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.579 17:23:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.579 c807dc3b-eb07-4eff-adba-1c3342db9713 00:06:52.579 17:23:25 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.579 17:23:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:06:52.579 17:23:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.579 17:23:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.579 9870c93a-5953-46b8-8c43-75141c9b8091 00:06:52.579 17:23:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.579 17:23:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:52.579 17:23:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.579 17:23:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.579 44467c03-714f-4083-be6d-f6e5d8a0435e 00:06:52.579 17:23:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.579 17:23:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:06:52.579 17:23:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:52.579 17:23:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.579 17:23:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.579 [2024-12-07 17:23:25.838802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 9870c93a-5953-46b8-8c43-75141c9b8091 is claimed 00:06:52.579 [2024-12-07 17:23:25.838893] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 44467c03-714f-4083-be6d-f6e5d8a0435e is claimed 00:06:52.579 [2024-12-07 17:23:25.839076] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:52.579 [2024-12-07 17:23:25.839097] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:06:52.579 [2024-12-07 17:23:25.839389] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:52.579 [2024-12-07 17:23:25.839613] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:52.579 [2024-12-07 17:23:25.839626] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:52.579 [2024-12-07 17:23:25.839799] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:52.579 17:23:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.579 17:23:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:52.579 17:23:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.579 17:23:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:52.579 17:23:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.579 17:23:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.579 17:23:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:52.579 17:23:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:52.579 17:23:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.579 17:23:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.579 17:23:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:52.579 17:23:25 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.579 17:23:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:52.579 17:23:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:52.579 17:23:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:06:52.579 17:23:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:52.579 17:23:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:52.579 17:23:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.579 17:23:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.579 [2024-12-07 17:23:25.950864] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:52.839 17:23:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.839 17:23:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:52.839 17:23:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:52.839 17:23:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:06:52.839 17:23:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:52.839 17:23:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.839 17:23:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.839 [2024-12-07 17:23:25.978797] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:52.839 [2024-12-07 17:23:25.978829] bdev_raid.c:2330:raid_bdev_resize_base_bdev: 
*NOTICE*: base_bdev '9870c93a-5953-46b8-8c43-75141c9b8091' was resized: old size 131072, new size 204800 00:06:52.839 17:23:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.839 17:23:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:52.839 17:23:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.839 17:23:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.839 [2024-12-07 17:23:25.990660] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:52.839 [2024-12-07 17:23:25.990685] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '44467c03-714f-4083-be6d-f6e5d8a0435e' was resized: old size 131072, new size 204800 00:06:52.839 [2024-12-07 17:23:25.990715] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:06:52.839 17:23:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.839 17:23:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:52.839 17:23:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:52.839 17:23:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.839 17:23:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.839 17:23:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.839 17:23:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:52.839 17:23:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:52.839 17:23:26 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:52.839 17:23:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.839 17:23:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.839 17:23:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.839 17:23:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:52.839 17:23:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:52.839 17:23:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:52.839 17:23:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:52.839 17:23:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:06:52.839 17:23:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.839 17:23:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.839 [2024-12-07 17:23:26.102617] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:52.839 17:23:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.839 17:23:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:52.839 17:23:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:52.839 17:23:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:06:52.839 17:23:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:52.839 17:23:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:06:52.839 17:23:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.839 [2024-12-07 17:23:26.146286] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:06:52.839 [2024-12-07 17:23:26.146361] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:06:52.839 [2024-12-07 17:23:26.146378] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:52.839 [2024-12-07 17:23:26.146393] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:52.839 [2024-12-07 17:23:26.146505] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:52.839 [2024-12-07 17:23:26.146541] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:52.839 [2024-12-07 17:23:26.146554] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:52.839 17:23:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.839 17:23:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:52.839 17:23:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.839 17:23:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.839 [2024-12-07 17:23:26.158200] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:52.839 [2024-12-07 17:23:26.158253] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:52.839 [2024-12-07 17:23:26.158276] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:52.839 [2024-12-07 17:23:26.158289] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:52.839 
[2024-12-07 17:23:26.160796] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:52.839 [2024-12-07 17:23:26.160912] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:52.839 [2024-12-07 17:23:26.162977] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 9870c93a-5953-46b8-8c43-75141c9b8091 00:06:52.839 [2024-12-07 17:23:26.163099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 9870c93a-5953-46b8-8c43-75141c9b8091 is claimed 00:06:52.839 [2024-12-07 17:23:26.163262] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 44467c03-714f-4083-be6d-f6e5d8a0435e 00:06:52.839 [2024-12-07 17:23:26.163286] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 44467c03-714f-4083-be6d-f6e5d8a0435e is claimed 00:06:52.839 [2024-12-07 17:23:26.163483] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 44467c03-714f-4083-be6d-f6e5d8a0435e (2) smaller than existing raid bdev Raid (3) 00:06:52.839 [2024-12-07 17:23:26.163518] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 9870c93a-5953-46b8-8c43-75141c9b8091: File exists 00:06:52.840 [2024-12-07 17:23:26.163562] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:06:52.840 [2024-12-07 17:23:26.163576] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:06:52.840 pt0 00:06:52.840 [2024-12-07 17:23:26.163867] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:06:52.840 [2024-12-07 17:23:26.164066] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:06:52.840 [2024-12-07 17:23:26.164083] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:06:52.840 [2024-12-07 17:23:26.164261] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:06:52.840 17:23:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.840 17:23:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:06:52.840 17:23:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.840 17:23:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.840 17:23:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.840 17:23:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:52.840 17:23:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:06:52.840 17:23:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:52.840 17:23:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:52.840 17:23:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.840 17:23:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.840 [2024-12-07 17:23:26.187246] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:52.840 17:23:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.099 17:23:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:53.099 17:23:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:53.099 17:23:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:06:53.099 17:23:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60153 00:06:53.099 17:23:26 bdev_raid.raid0_resize_superblock_test 
-- common/autotest_common.sh@954 -- # '[' -z 60153 ']' 00:06:53.099 17:23:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60153 00:06:53.099 17:23:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:06:53.099 17:23:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:53.099 17:23:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60153 00:06:53.099 killing process with pid 60153 00:06:53.099 17:23:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:53.099 17:23:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:53.099 17:23:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60153' 00:06:53.099 17:23:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60153 00:06:53.099 [2024-12-07 17:23:26.269669] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:53.100 [2024-12-07 17:23:26.269740] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:53.100 [2024-12-07 17:23:26.269785] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:53.100 [2024-12-07 17:23:26.269794] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:06:53.100 17:23:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60153 00:06:55.007 [2024-12-07 17:23:27.899049] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:55.945 17:23:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:55.945 00:06:55.945 real 0m5.047s 00:06:55.945 user 0m5.226s 00:06:55.945 sys 0m0.613s 
00:06:55.945 17:23:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.945 17:23:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.945 ************************************ 00:06:55.945 END TEST raid0_resize_superblock_test 00:06:55.945 ************************************ 00:06:55.945 17:23:29 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:06:55.945 17:23:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:55.945 17:23:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.945 17:23:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:55.945 ************************************ 00:06:55.945 START TEST raid1_resize_superblock_test 00:06:55.945 ************************************ 00:06:55.945 17:23:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 00:06:55.945 17:23:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:06:55.945 17:23:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60257 00:06:55.945 Process raid pid: 60257 00:06:55.945 17:23:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:55.945 17:23:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60257' 00:06:55.945 17:23:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60257 00:06:55.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:55.945 17:23:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60257 ']' 00:06:55.945 17:23:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.945 17:23:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:55.945 17:23:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.945 17:23:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:55.945 17:23:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.205 [2024-12-07 17:23:29.378690] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:06:56.205 [2024-12-07 17:23:29.378890] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:56.205 [2024-12-07 17:23:29.553890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.465 [2024-12-07 17:23:29.673760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.724 [2024-12-07 17:23:29.888546] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:56.724 [2024-12-07 17:23:29.888671] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:56.984 17:23:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:56.984 17:23:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:06:56.984 17:23:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 
00:06:56.984 17:23:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.984 17:23:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.555 malloc0 00:06:57.555 17:23:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.555 17:23:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:57.555 17:23:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.555 17:23:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.555 [2024-12-07 17:23:30.774396] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:57.555 [2024-12-07 17:23:30.774459] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:57.555 [2024-12-07 17:23:30.774483] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:57.555 [2024-12-07 17:23:30.774495] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:57.555 [2024-12-07 17:23:30.776784] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:57.555 [2024-12-07 17:23:30.776829] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:57.555 pt0 00:06:57.555 17:23:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.555 17:23:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:06:57.555 17:23:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.555 17:23:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.555 be84dda3-f5d9-408a-b392-dc60ec50188d 00:06:57.555 17:23:30 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.555 17:23:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:06:57.555 17:23:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.555 17:23:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.555 c4d84ab6-3423-4b99-8a35-b8dd8b32edc8 00:06:57.555 17:23:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.555 17:23:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:57.555 17:23:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.555 17:23:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.555 e87a7b1d-626e-4edb-b713-24ec39d2d518 00:06:57.555 17:23:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.555 17:23:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:06:57.555 17:23:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:57.555 17:23:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.555 17:23:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.555 [2024-12-07 17:23:30.909221] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev c4d84ab6-3423-4b99-8a35-b8dd8b32edc8 is claimed 00:06:57.555 [2024-12-07 17:23:30.909334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev e87a7b1d-626e-4edb-b713-24ec39d2d518 is claimed 00:06:57.555 [2024-12-07 17:23:30.909490] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:57.555 [2024-12-07 17:23:30.909507] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:06:57.555 [2024-12-07 17:23:30.909790] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:57.555 [2024-12-07 17:23:30.910042] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:57.555 [2024-12-07 17:23:30.910056] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:57.555 [2024-12-07 17:23:30.910257] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:57.555 17:23:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.555 17:23:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:57.555 17:23:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:57.555 17:23:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.555 17:23:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.815 17:23:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.815 17:23:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:57.815 17:23:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:57.815 17:23:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:57.815 17:23:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.815 17:23:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.815 17:23:30 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.815 17:23:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:57.815 17:23:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:57.815 17:23:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:06:57.815 17:23:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:57.815 17:23:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:57.815 17:23:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.815 17:23:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.815 [2024-12-07 17:23:31.025515] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:57.815 17:23:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.815 17:23:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:57.815 17:23:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:57.815 17:23:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:06:57.815 17:23:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:57.815 17:23:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.815 17:23:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.815 [2024-12-07 17:23:31.069278] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:57.815 [2024-12-07 17:23:31.069369] bdev_raid.c:2330:raid_bdev_resize_base_bdev: 
*NOTICE*: base_bdev 'c4d84ab6-3423-4b99-8a35-b8dd8b32edc8' was resized: old size 131072, new size 204800 00:06:57.815 17:23:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.815 17:23:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:57.815 17:23:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.815 17:23:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.815 [2024-12-07 17:23:31.081174] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:57.815 [2024-12-07 17:23:31.081206] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'e87a7b1d-626e-4edb-b713-24ec39d2d518' was resized: old size 131072, new size 204800 00:06:57.815 [2024-12-07 17:23:31.081240] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:06:57.815 17:23:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.815 17:23:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:57.815 17:23:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:57.815 17:23:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.815 17:23:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.815 17:23:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.815 17:23:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:57.815 17:23:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:57.815 17:23:31 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.815 17:23:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.815 17:23:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:57.815 17:23:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.815 17:23:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:57.815 17:23:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:57.815 17:23:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:57.815 17:23:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.815 17:23:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.815 17:23:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:57.815 17:23:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:06:57.815 [2024-12-07 17:23:31.169133] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:57.815 17:23:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.075 17:23:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:58.075 17:23:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:58.075 17:23:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:06:58.075 17:23:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:58.075 17:23:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:06:58.075 17:23:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.075 [2024-12-07 17:23:31.224787] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:06:58.075 [2024-12-07 17:23:31.224976] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:06:58.075 [2024-12-07 17:23:31.225036] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:58.075 [2024-12-07 17:23:31.225225] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:58.075 [2024-12-07 17:23:31.225483] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:58.075 [2024-12-07 17:23:31.225606] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:58.075 [2024-12-07 17:23:31.225659] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:58.075 17:23:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.075 17:23:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:58.075 17:23:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.075 17:23:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.075 [2024-12-07 17:23:31.232638] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:58.075 [2024-12-07 17:23:31.232739] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:58.075 [2024-12-07 17:23:31.232775] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:58.075 [2024-12-07 17:23:31.232815] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:58.075 
[2024-12-07 17:23:31.235321] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:58.075 [2024-12-07 17:23:31.235435] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:58.075 [2024-12-07 17:23:31.237740] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev c4d84ab6-3423-4b99-8a35-b8dd8b32edc8 00:06:58.075 [2024-12-07 17:23:31.237892] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev c4d84ab6-3423-4b99-8a35-b8dd8b32edc8 is claimed 00:06:58.075 pt0 00:06:58.075 [2024-12-07 17:23:31.238133] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev e87a7b1d-626e-4edb-b713-24ec39d2d518 00:06:58.075 [2024-12-07 17:23:31.238160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev e87a7b1d-626e-4edb-b713-24ec39d2d518 is claimed 00:06:58.075 [2024-12-07 17:23:31.238299] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev e87a7b1d-626e-4edb-b713-24ec39d2d518 (2) smaller than existing raid bdev Raid (3) 00:06:58.075 [2024-12-07 17:23:31.238389] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev c4d84ab6-3423-4b99-8a35-b8dd8b32edc8: File exists 00:06:58.075 [2024-12-07 17:23:31.238488] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:06:58.075 [2024-12-07 17:23:31.238531] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:06:58.075 17:23:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.075 [2024-12-07 17:23:31.238855] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:06:58.075 [2024-12-07 17:23:31.239113] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:06:58.075 17:23:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:06:58.075 
[2024-12-07 17:23:31.239172] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:06:58.075 17:23:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.075 [2024-12-07 17:23:31.239414] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:58.075 17:23:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.075 17:23:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.075 17:23:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:58.075 17:23:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:06:58.075 17:23:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:58.075 17:23:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:58.075 17:23:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.075 17:23:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.075 [2024-12-07 17:23:31.260975] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:58.075 17:23:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.075 17:23:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:58.075 17:23:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:58.075 17:23:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:06:58.075 17:23:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60257 00:06:58.075 17:23:31 bdev_raid.raid1_resize_superblock_test 
-- common/autotest_common.sh@954 -- # '[' -z 60257 ']' 00:06:58.075 17:23:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60257 00:06:58.075 17:23:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:06:58.075 17:23:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:58.075 17:23:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60257 00:06:58.075 killing process with pid 60257 00:06:58.075 17:23:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:58.075 17:23:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:58.075 17:23:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60257' 00:06:58.075 17:23:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60257 00:06:58.075 [2024-12-07 17:23:31.332428] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:58.075 [2024-12-07 17:23:31.332556] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:58.075 17:23:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60257 00:06:58.075 [2024-12-07 17:23:31.332633] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:58.075 [2024-12-07 17:23:31.332645] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:06:59.461 [2024-12-07 17:23:32.838600] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:00.840 ************************************ 00:07:00.840 END TEST raid1_resize_superblock_test 00:07:00.840 ************************************ 00:07:00.840 17:23:34 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:00.840 00:07:00.840 real 0m4.768s 00:07:00.840 user 0m4.970s 00:07:00.840 sys 0m0.560s 00:07:00.840 17:23:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.840 17:23:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.840 17:23:34 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:07:00.840 17:23:34 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:07:00.840 17:23:34 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:07:00.840 17:23:34 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:07:00.840 17:23:34 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:07:00.840 17:23:34 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:07:00.840 17:23:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:00.840 17:23:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.840 17:23:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:00.840 ************************************ 00:07:00.840 START TEST raid_function_test_raid0 00:07:00.840 ************************************ 00:07:00.840 17:23:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:07:00.840 17:23:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:07:00.840 17:23:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:00.840 17:23:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:00.840 Process raid pid: 60354 00:07:00.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:00.840 17:23:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60354 00:07:00.840 17:23:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60354' 00:07:00.840 17:23:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60354 00:07:00.840 17:23:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:00.840 17:23:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 60354 ']' 00:07:00.840 17:23:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.840 17:23:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:00.840 17:23:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.840 17:23:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:00.840 17:23:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:00.840 [2024-12-07 17:23:34.219550] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:07:01.099 [2024-12-07 17:23:34.219805] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:01.099 [2024-12-07 17:23:34.398282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.358 [2024-12-07 17:23:34.538837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.617 [2024-12-07 17:23:34.763668] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:01.617 [2024-12-07 17:23:34.763732] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:01.875 17:23:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:01.875 17:23:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:07:01.875 17:23:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:01.875 17:23:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.875 17:23:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:01.875 Base_1 00:07:01.875 17:23:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.875 17:23:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:01.875 17:23:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.875 17:23:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:01.875 Base_2 00:07:01.875 17:23:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.875 17:23:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 
64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:07:01.875 17:23:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.875 17:23:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:01.875 [2024-12-07 17:23:35.229118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:01.875 [2024-12-07 17:23:35.231206] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:01.875 [2024-12-07 17:23:35.231304] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:01.875 [2024-12-07 17:23:35.231335] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:01.875 [2024-12-07 17:23:35.231648] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:01.875 [2024-12-07 17:23:35.231845] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:01.875 [2024-12-07 17:23:35.231863] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:07:01.875 [2024-12-07 17:23:35.232071] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:01.875 17:23:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.875 17:23:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:01.875 17:23:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.875 17:23:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:01.875 17:23:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:01.875 17:23:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.133 17:23:35 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:02.133 17:23:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:02.133 17:23:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:02.133 17:23:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:02.133 17:23:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:02.133 17:23:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:02.133 17:23:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:02.133 17:23:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:02.133 17:23:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:07:02.133 17:23:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:02.133 17:23:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:02.133 17:23:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:02.390 [2024-12-07 17:23:35.532710] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:02.390 /dev/nbd0 00:07:02.390 17:23:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:02.390 17:23:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:02.390 17:23:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:02.390 17:23:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:07:02.390 17:23:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:02.390 
17:23:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:02.390 17:23:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:02.390 17:23:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:07:02.390 17:23:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:02.390 17:23:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:02.390 17:23:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:02.390 1+0 records in 00:07:02.390 1+0 records out 00:07:02.390 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000428431 s, 9.6 MB/s 00:07:02.390 17:23:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.390 17:23:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:07:02.390 17:23:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.390 17:23:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:02.390 17:23:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:07:02.390 17:23:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:02.390 17:23:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:02.390 17:23:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:02.390 17:23:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:02.390 17:23:35 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:02.649 17:23:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:02.649 { 00:07:02.649 "nbd_device": "/dev/nbd0", 00:07:02.649 "bdev_name": "raid" 00:07:02.649 } 00:07:02.649 ]' 00:07:02.649 17:23:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:02.649 { 00:07:02.649 "nbd_device": "/dev/nbd0", 00:07:02.649 "bdev_name": "raid" 00:07:02.649 } 00:07:02.649 ]' 00:07:02.649 17:23:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:02.649 17:23:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:02.649 17:23:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:02.649 17:23:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:02.649 17:23:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:07:02.649 17:23:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:07:02.649 17:23:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:07:02.649 17:23:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:02.649 17:23:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:02.649 17:23:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:02.649 17:23:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:02.649 17:23:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:02.649 17:23:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:02.649 17:23:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' 
-f 5 00:07:02.649 17:23:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:02.649 17:23:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:02.649 17:23:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:02.649 17:23:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:02.649 17:23:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:02.649 17:23:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:02.649 17:23:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:02.649 17:23:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:02.649 17:23:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:02.649 17:23:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:02.649 17:23:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:02.649 4096+0 records in 00:07:02.649 4096+0 records out 00:07:02.649 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0357413 s, 58.7 MB/s 00:07:02.649 17:23:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:02.909 4096+0 records in 00:07:02.909 4096+0 records out 00:07:02.909 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.199691 s, 10.5 MB/s 00:07:02.909 17:23:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:02.909 17:23:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:02.909 17:23:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 
-- # (( i = 0 )) 00:07:02.909 17:23:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:02.909 17:23:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:02.909 17:23:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:02.909 17:23:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:02.909 128+0 records in 00:07:02.909 128+0 records out 00:07:02.909 65536 bytes (66 kB, 64 KiB) copied, 0.00119688 s, 54.8 MB/s 00:07:02.909 17:23:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:02.909 17:23:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:02.909 17:23:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:02.909 17:23:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:02.909 17:23:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:02.909 17:23:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:02.909 17:23:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:02.909 17:23:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:02.909 2035+0 records in 00:07:02.909 2035+0 records out 00:07:02.909 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0156509 s, 66.6 MB/s 00:07:02.909 17:23:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:02.909 17:23:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:02.909 17:23:36 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:02.909 17:23:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:02.909 17:23:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:02.909 17:23:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:02.909 17:23:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:02.909 17:23:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:02.909 456+0 records in 00:07:02.909 456+0 records out 00:07:02.909 233472 bytes (233 kB, 228 KiB) copied, 0.00294473 s, 79.3 MB/s 00:07:02.909 17:23:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:02.909 17:23:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:02.909 17:23:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:02.909 17:23:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:02.909 17:23:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:02.909 17:23:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:07:02.909 17:23:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:02.909 17:23:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:02.909 17:23:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:02.909 17:23:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:02.909 17:23:36 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:07:02.910 17:23:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.910 17:23:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:03.169 17:23:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:03.169 [2024-12-07 17:23:36.486433] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:03.169 17:23:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:03.169 17:23:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:03.169 17:23:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.169 17:23:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.169 17:23:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:03.169 17:23:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:07:03.169 17:23:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.169 17:23:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:03.169 17:23:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:03.169 17:23:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:03.429 17:23:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:03.429 17:23:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:03.429 17:23:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:07:03.429 17:23:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:03.429 17:23:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:03.429 17:23:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:03.429 17:23:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:07:03.429 17:23:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:07:03.429 17:23:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:03.429 17:23:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:07:03.429 17:23:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:03.429 17:23:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60354 00:07:03.429 17:23:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 60354 ']' 00:07:03.429 17:23:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 60354 00:07:03.429 17:23:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:07:03.429 17:23:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:03.429 17:23:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60354 00:07:03.687 killing process with pid 60354 00:07:03.687 17:23:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:03.687 17:23:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:03.687 17:23:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60354' 00:07:03.687 17:23:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 60354 
00:07:03.687 [2024-12-07 17:23:36.814915] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:03.687 17:23:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60354 00:07:03.687 [2024-12-07 17:23:36.815054] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:03.687 [2024-12-07 17:23:36.815106] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:03.687 [2024-12-07 17:23:36.815120] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:07:03.687 [2024-12-07 17:23:37.025493] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:05.059 17:23:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:07:05.059 00:07:05.059 real 0m4.009s 00:07:05.059 user 0m4.751s 00:07:05.059 sys 0m0.986s 00:07:05.059 17:23:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.059 17:23:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:05.059 ************************************ 00:07:05.059 END TEST raid_function_test_raid0 00:07:05.059 ************************************ 00:07:05.059 17:23:38 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:07:05.059 17:23:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:05.059 17:23:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.059 17:23:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:05.060 ************************************ 00:07:05.060 START TEST raid_function_test_concat 00:07:05.060 ************************************ 00:07:05.060 17:23:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:07:05.060 17:23:38 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:07:05.060 17:23:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:05.060 17:23:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:05.060 17:23:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60483 00:07:05.060 Process raid pid: 60483 00:07:05.060 17:23:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:05.060 17:23:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60483' 00:07:05.060 17:23:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60483 00:07:05.060 17:23:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60483 ']' 00:07:05.060 17:23:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.060 17:23:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:05.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.060 17:23:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.060 17:23:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:05.060 17:23:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:05.060 [2024-12-07 17:23:38.293420] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:07:05.060 [2024-12-07 17:23:38.293553] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:05.317 [2024-12-07 17:23:38.469577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.317 [2024-12-07 17:23:38.586158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.575 [2024-12-07 17:23:38.790649] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:05.575 [2024-12-07 17:23:38.790693] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:05.835 17:23:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:05.835 17:23:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:07:05.835 17:23:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:05.835 17:23:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.835 17:23:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:05.835 Base_1 00:07:05.835 17:23:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.835 17:23:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:05.835 17:23:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.835 17:23:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:05.835 Base_2 00:07:06.094 17:23:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.094 17:23:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:07:06.094 17:23:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.094 17:23:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:06.094 [2024-12-07 17:23:39.221869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:06.094 [2024-12-07 17:23:39.223742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:06.094 [2024-12-07 17:23:39.223833] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:06.094 [2024-12-07 17:23:39.223845] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:06.094 [2024-12-07 17:23:39.224141] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:06.094 [2024-12-07 17:23:39.224314] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:06.094 [2024-12-07 17:23:39.224330] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:07:06.094 [2024-12-07 17:23:39.224512] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:06.094 17:23:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.094 17:23:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:06.094 17:23:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:06.094 17:23:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.094 17:23:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:06.094 17:23:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.094 17:23:39 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:06.094 17:23:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:06.094 17:23:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:06.094 17:23:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:06.094 17:23:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:06.094 17:23:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:06.094 17:23:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:06.094 17:23:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:06.094 17:23:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:07:06.094 17:23:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:06.094 17:23:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:06.094 17:23:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:06.355 [2024-12-07 17:23:39.481482] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:06.355 /dev/nbd0 00:07:06.355 17:23:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:06.355 17:23:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:06.355 17:23:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:06.355 17:23:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:07:06.355 17:23:39 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:06.355 17:23:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:06.355 17:23:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:06.355 17:23:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:07:06.355 17:23:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:06.355 17:23:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:06.355 17:23:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:06.355 1+0 records in 00:07:06.355 1+0 records out 00:07:06.355 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000436067 s, 9.4 MB/s 00:07:06.355 17:23:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.355 17:23:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:07:06.355 17:23:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.355 17:23:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:06.355 17:23:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:07:06.355 17:23:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:06.355 17:23:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:06.355 17:23:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:06.355 17:23:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 
00:07:06.355 17:23:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:06.640 17:23:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:06.640 { 00:07:06.640 "nbd_device": "/dev/nbd0", 00:07:06.640 "bdev_name": "raid" 00:07:06.640 } 00:07:06.640 ]' 00:07:06.640 17:23:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:06.640 { 00:07:06.640 "nbd_device": "/dev/nbd0", 00:07:06.640 "bdev_name": "raid" 00:07:06.640 } 00:07:06.640 ]' 00:07:06.640 17:23:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:06.640 17:23:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:06.640 17:23:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:06.640 17:23:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:06.640 17:23:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:07:06.640 17:23:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:07:06.640 17:23:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:07:06.640 17:23:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:06.640 17:23:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:06.640 17:23:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:06.640 17:23:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:06.640 17:23:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:06.640 17:23:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:06.640 17:23:39 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:06.640 17:23:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:06.640 17:23:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:06.640 17:23:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:06.640 17:23:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:06.640 17:23:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:06.640 17:23:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:06.640 17:23:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:06.640 17:23:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:06.640 17:23:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:06.640 17:23:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:06.640 17:23:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:06.640 4096+0 records in 00:07:06.640 4096+0 records out 00:07:06.640 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0235981 s, 88.9 MB/s 00:07:06.640 17:23:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:06.897 4096+0 records in 00:07:06.897 4096+0 records out 00:07:06.897 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.221342 s, 9.5 MB/s 00:07:06.897 17:23:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:06.897 17:23:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 
/raidtest/raidrandtest /dev/nbd0 00:07:06.897 17:23:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:06.897 17:23:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:06.897 17:23:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:06.897 17:23:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:06.897 17:23:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:06.897 128+0 records in 00:07:06.897 128+0 records out 00:07:06.897 65536 bytes (66 kB, 64 KiB) copied, 0.000717839 s, 91.3 MB/s 00:07:06.897 17:23:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:06.897 17:23:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:06.897 17:23:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:06.897 17:23:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:06.897 17:23:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:06.897 17:23:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:06.897 17:23:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:06.897 17:23:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:06.897 2035+0 records in 00:07:06.897 2035+0 records out 00:07:06.897 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00618169 s, 169 MB/s 00:07:06.897 17:23:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:06.897 17:23:40 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:06.897 17:23:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:06.897 17:23:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:06.897 17:23:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:06.897 17:23:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:06.897 17:23:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:06.897 17:23:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:06.898 456+0 records in 00:07:06.898 456+0 records out 00:07:06.898 233472 bytes (233 kB, 228 KiB) copied, 0.00177857 s, 131 MB/s 00:07:06.898 17:23:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:06.898 17:23:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:06.898 17:23:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:06.898 17:23:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:06.898 17:23:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:06.898 17:23:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:07:06.898 17:23:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:06.898 17:23:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:06.898 17:23:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:06.898 
17:23:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:06.898 17:23:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:07:06.898 17:23:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.898 17:23:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:07.155 17:23:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:07.155 [2024-12-07 17:23:40.416452] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:07.155 17:23:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:07.155 17:23:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:07.155 17:23:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.155 17:23:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.155 17:23:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:07.155 17:23:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:07:07.155 17:23:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.155 17:23:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:07.155 17:23:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:07.155 17:23:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:07.410 17:23:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:07.410 17:23:40 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:07.410 17:23:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:07.410 17:23:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:07.410 17:23:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:07.410 17:23:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:07.410 17:23:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:07:07.410 17:23:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:07:07.410 17:23:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:07.410 17:23:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:07:07.410 17:23:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:07.410 17:23:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60483 00:07:07.410 17:23:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60483 ']' 00:07:07.410 17:23:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 60483 00:07:07.410 17:23:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:07:07.410 17:23:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:07.410 17:23:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60483 00:07:07.410 killing process with pid 60483 00:07:07.410 17:23:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:07.410 17:23:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:07.410 17:23:40 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 60483' 00:07:07.410 17:23:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60483 00:07:07.410 17:23:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60483 00:07:07.410 [2024-12-07 17:23:40.748840] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:07.410 [2024-12-07 17:23:40.748982] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:07.410 [2024-12-07 17:23:40.749064] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:07.410 [2024-12-07 17:23:40.749080] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:07:07.669 [2024-12-07 17:23:40.998748] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:09.044 ************************************ 00:07:09.044 END TEST raid_function_test_concat 00:07:09.044 ************************************ 00:07:09.044 17:23:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:07:09.044 00:07:09.044 real 0m3.903s 00:07:09.044 user 0m4.631s 00:07:09.044 sys 0m0.833s 00:07:09.044 17:23:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:09.044 17:23:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:09.044 17:23:42 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:07:09.044 17:23:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:09.044 17:23:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.044 17:23:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:09.044 ************************************ 00:07:09.044 START TEST raid0_resize_test 00:07:09.044 ************************************ 00:07:09.044 17:23:42 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0 00:07:09.044 17:23:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:07:09.044 17:23:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:09.044 17:23:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:09.044 17:23:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:09.044 17:23:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:09.044 17:23:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:09.044 17:23:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:09.045 17:23:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:09.045 17:23:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60612 00:07:09.045 17:23:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:09.045 17:23:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60612' 00:07:09.045 Process raid pid: 60612 00:07:09.045 17:23:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60612 00:07:09.045 17:23:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60612 ']' 00:07:09.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:09.045 17:23:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.045 17:23:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:09.045 17:23:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.045 17:23:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:09.045 17:23:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.045 [2024-12-07 17:23:42.264080] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:07:09.045 [2024-12-07 17:23:42.264196] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:09.045 [2024-12-07 17:23:42.423849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.304 [2024-12-07 17:23:42.546528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.563 [2024-12-07 17:23:42.744594] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:09.563 [2024-12-07 17:23:42.744633] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:09.823 17:23:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:09.823 17:23:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:07:09.823 17:23:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:09.823 17:23:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.823 17:23:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.823 Base_1 
00:07:09.823 17:23:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.823 17:23:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:09.823 17:23:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.823 17:23:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.823 Base_2 00:07:09.823 17:23:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.823 17:23:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:07:09.823 17:23:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:09.823 17:23:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.823 17:23:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.823 [2024-12-07 17:23:43.120411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:09.823 [2024-12-07 17:23:43.122220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:09.823 [2024-12-07 17:23:43.122314] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:09.823 [2024-12-07 17:23:43.122355] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:09.823 [2024-12-07 17:23:43.122604] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:09.823 [2024-12-07 17:23:43.122751] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:09.823 [2024-12-07 17:23:43.122784] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:09.823 [2024-12-07 17:23:43.122974] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:07:09.823 17:23:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.823 17:23:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:09.823 17:23:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.823 17:23:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.823 [2024-12-07 17:23:43.132373] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:09.823 [2024-12-07 17:23:43.132435] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:09.823 true 00:07:09.823 17:23:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.823 17:23:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:09.823 17:23:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:09.823 17:23:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.823 17:23:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.823 [2024-12-07 17:23:43.148559] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:09.823 17:23:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.823 17:23:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:07:09.823 17:23:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:07:09.823 17:23:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:07:09.823 17:23:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:07:09.823 17:23:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:07:09.823 17:23:43 bdev_raid.raid0_resize_test -- 
bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:09.823 17:23:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.823 17:23:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.823 [2024-12-07 17:23:43.188329] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:09.823 [2024-12-07 17:23:43.188359] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:09.823 [2024-12-07 17:23:43.188393] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:07:09.823 true 00:07:09.823 17:23:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.823 17:23:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:09.823 17:23:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.823 17:23:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:09.823 17:23:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.823 [2024-12-07 17:23:43.200439] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:10.084 17:23:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.084 17:23:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:07:10.084 17:23:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:07:10.084 17:23:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:07:10.084 17:23:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:07:10.084 17:23:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:07:10.084 17:23:43 bdev_raid.raid0_resize_test -- 
bdev/bdev_raid.sh@387 -- # killprocess 60612 00:07:10.084 17:23:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60612 ']' 00:07:10.084 17:23:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60612 00:07:10.084 17:23:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:07:10.084 17:23:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:10.084 17:23:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60612 00:07:10.084 17:23:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:10.084 17:23:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:10.084 17:23:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60612' 00:07:10.084 killing process with pid 60612 00:07:10.084 17:23:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60612 00:07:10.084 [2024-12-07 17:23:43.271990] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:10.084 [2024-12-07 17:23:43.272163] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:10.084 17:23:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60612 00:07:10.084 [2024-12-07 17:23:43.272249] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:10.084 [2024-12-07 17:23:43.272261] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:10.084 [2024-12-07 17:23:43.289701] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:11.023 17:23:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:11.023 00:07:11.023 real 0m2.225s 00:07:11.023 user 0m2.370s 00:07:11.023 sys 0m0.315s 00:07:11.023 17:23:44 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.023 17:23:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.023 ************************************ 00:07:11.023 END TEST raid0_resize_test 00:07:11.023 ************************************ 00:07:11.283 17:23:44 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:07:11.283 17:23:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:11.283 17:23:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.283 17:23:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:11.283 ************************************ 00:07:11.283 START TEST raid1_resize_test 00:07:11.283 ************************************ 00:07:11.283 17:23:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:07:11.283 17:23:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:07:11.283 17:23:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:11.283 17:23:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:11.283 17:23:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:11.283 17:23:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:11.283 17:23:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:11.283 17:23:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:11.283 17:23:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:11.283 17:23:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60668 00:07:11.283 17:23:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:11.283 17:23:44 
bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60668' 00:07:11.283 Process raid pid: 60668 00:07:11.283 17:23:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60668 00:07:11.283 17:23:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60668 ']' 00:07:11.283 17:23:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.283 17:23:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:11.283 17:23:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.283 17:23:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:11.283 17:23:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.283 [2024-12-07 17:23:44.557680] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:07:11.283 [2024-12-07 17:23:44.557875] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:11.543 [2024-12-07 17:23:44.736170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.543 [2024-12-07 17:23:44.847548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.801 [2024-12-07 17:23:45.059403] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:11.801 [2024-12-07 17:23:45.059498] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:12.060 17:23:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:12.060 17:23:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:07:12.060 17:23:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:12.060 17:23:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.060 17:23:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.060 Base_1 00:07:12.060 17:23:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.060 17:23:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:12.060 17:23:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.060 17:23:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.319 Base_2 00:07:12.319 17:23:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.319 17:23:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:07:12.319 17:23:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd 
bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:12.319 17:23:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.319 17:23:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.319 [2024-12-07 17:23:45.450900] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:12.319 [2024-12-07 17:23:45.453049] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:12.319 [2024-12-07 17:23:45.453148] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:12.319 [2024-12-07 17:23:45.453216] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:12.319 [2024-12-07 17:23:45.453533] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:12.319 [2024-12-07 17:23:45.453722] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:12.319 [2024-12-07 17:23:45.453766] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:12.319 [2024-12-07 17:23:45.453996] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:12.319 17:23:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.319 17:23:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:12.319 17:23:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.319 17:23:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.319 [2024-12-07 17:23:45.462862] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:12.319 [2024-12-07 17:23:45.462955] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:12.319 true 00:07:12.319 
17:23:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.319 17:23:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:12.319 17:23:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:12.319 17:23:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.319 17:23:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.319 [2024-12-07 17:23:45.479118] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:12.319 17:23:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.319 17:23:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:07:12.319 17:23:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:07:12.319 17:23:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:07:12.319 17:23:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:07:12.319 17:23:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:07:12.319 17:23:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:12.319 17:23:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.319 17:23:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.319 [2024-12-07 17:23:45.526779] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:12.319 [2024-12-07 17:23:45.526856] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:12.319 [2024-12-07 17:23:45.526946] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:07:12.319 true 00:07:12.319 17:23:45 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.319 17:23:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:12.319 17:23:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:12.319 17:23:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.319 17:23:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.319 [2024-12-07 17:23:45.542945] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:12.319 17:23:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.319 17:23:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:07:12.319 17:23:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:07:12.319 17:23:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:07:12.319 17:23:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:07:12.319 17:23:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:07:12.319 17:23:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60668 00:07:12.320 17:23:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60668 ']' 00:07:12.320 17:23:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 60668 00:07:12.320 17:23:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:07:12.320 17:23:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:12.320 17:23:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60668 00:07:12.320 17:23:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:12.320 17:23:45 bdev_raid.raid1_resize_test -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:12.320 17:23:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60668' 00:07:12.320 killing process with pid 60668 00:07:12.320 17:23:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 60668 00:07:12.320 [2024-12-07 17:23:45.617440] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:12.320 [2024-12-07 17:23:45.617614] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:12.320 17:23:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 60668 00:07:12.320 [2024-12-07 17:23:45.618258] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:12.320 [2024-12-07 17:23:45.618341] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:12.320 [2024-12-07 17:23:45.638791] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:13.697 17:23:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:13.697 00:07:13.697 real 0m2.397s 00:07:13.697 user 0m2.591s 00:07:13.697 sys 0m0.310s 00:07:13.697 17:23:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:13.697 17:23:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.697 ************************************ 00:07:13.697 END TEST raid1_resize_test 00:07:13.697 ************************************ 00:07:13.697 17:23:46 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:07:13.697 17:23:46 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:13.697 17:23:46 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:07:13.697 17:23:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:13.697 17:23:46 bdev_raid 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.697 17:23:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:13.697 ************************************ 00:07:13.697 START TEST raid_state_function_test 00:07:13.697 ************************************ 00:07:13.697 17:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:07:13.697 17:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:13.697 17:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:13.697 17:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:13.697 17:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:13.697 17:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:13.697 17:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:13.697 17:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:13.697 17:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:13.697 17:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:13.697 17:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:13.697 17:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:13.697 17:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:13.697 17:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:13.697 17:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:13.697 17:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # 
local raid_bdev_name=Existed_Raid 00:07:13.697 17:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:13.697 17:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:13.697 17:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:13.697 17:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:13.697 17:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:13.697 17:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:13.697 17:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:13.697 17:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:13.697 17:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60731 00:07:13.697 17:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:13.697 17:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60731' 00:07:13.697 Process raid pid: 60731 00:07:13.697 17:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60731 00:07:13.697 17:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 60731 ']' 00:07:13.698 17:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.698 17:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:13.698 17:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:13.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.698 17:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:13.698 17:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.698 [2024-12-07 17:23:47.007793] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:07:13.698 [2024-12-07 17:23:47.008030] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:13.956 [2024-12-07 17:23:47.176327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.956 [2024-12-07 17:23:47.311283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.216 [2024-12-07 17:23:47.551205] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:14.216 [2024-12-07 17:23:47.551244] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:14.784 17:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:14.785 17:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:14.785 17:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:14.785 17:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.785 17:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.785 [2024-12-07 17:23:47.902159] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:14.785 [2024-12-07 17:23:47.902216] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:07:14.785 [2024-12-07 17:23:47.902227] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:14.785 [2024-12-07 17:23:47.902236] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:14.785 17:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.785 17:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:14.785 17:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:14.785 17:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:14.785 17:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:14.785 17:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:14.785 17:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:14.785 17:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:14.785 17:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:14.785 17:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:14.785 17:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:14.785 17:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:14.785 17:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:14.785 17:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.785 17:23:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:14.785 17:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.785 17:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:14.785 "name": "Existed_Raid", 00:07:14.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:14.785 "strip_size_kb": 64, 00:07:14.785 "state": "configuring", 00:07:14.785 "raid_level": "raid0", 00:07:14.785 "superblock": false, 00:07:14.785 "num_base_bdevs": 2, 00:07:14.785 "num_base_bdevs_discovered": 0, 00:07:14.785 "num_base_bdevs_operational": 2, 00:07:14.785 "base_bdevs_list": [ 00:07:14.785 { 00:07:14.785 "name": "BaseBdev1", 00:07:14.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:14.785 "is_configured": false, 00:07:14.785 "data_offset": 0, 00:07:14.785 "data_size": 0 00:07:14.785 }, 00:07:14.785 { 00:07:14.785 "name": "BaseBdev2", 00:07:14.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:14.785 "is_configured": false, 00:07:14.785 "data_offset": 0, 00:07:14.785 "data_size": 0 00:07:14.785 } 00:07:14.785 ] 00:07:14.785 }' 00:07:14.785 17:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:14.785 17:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.045 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:15.045 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.045 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.045 [2024-12-07 17:23:48.325414] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:15.045 [2024-12-07 17:23:48.325518] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:15.045 17:23:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.045 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:15.045 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.045 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.045 [2024-12-07 17:23:48.337361] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:15.045 [2024-12-07 17:23:48.337441] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:15.045 [2024-12-07 17:23:48.337470] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:15.045 [2024-12-07 17:23:48.337495] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:15.045 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.045 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:15.045 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.045 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.045 [2024-12-07 17:23:48.383884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:15.045 BaseBdev1 00:07:15.045 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.045 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:15.045 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:15.045 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local 
bdev_timeout= 00:07:15.045 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:15.045 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:15.045 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:15.045 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:15.045 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.045 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.045 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.045 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:15.045 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.045 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.045 [ 00:07:15.045 { 00:07:15.045 "name": "BaseBdev1", 00:07:15.045 "aliases": [ 00:07:15.045 "32c892db-ab9e-4e13-a9c8-73494baa0f73" 00:07:15.045 ], 00:07:15.045 "product_name": "Malloc disk", 00:07:15.045 "block_size": 512, 00:07:15.045 "num_blocks": 65536, 00:07:15.045 "uuid": "32c892db-ab9e-4e13-a9c8-73494baa0f73", 00:07:15.045 "assigned_rate_limits": { 00:07:15.045 "rw_ios_per_sec": 0, 00:07:15.045 "rw_mbytes_per_sec": 0, 00:07:15.045 "r_mbytes_per_sec": 0, 00:07:15.045 "w_mbytes_per_sec": 0 00:07:15.045 }, 00:07:15.045 "claimed": true, 00:07:15.045 "claim_type": "exclusive_write", 00:07:15.045 "zoned": false, 00:07:15.045 "supported_io_types": { 00:07:15.045 "read": true, 00:07:15.045 "write": true, 00:07:15.045 "unmap": true, 00:07:15.045 "flush": true, 00:07:15.045 "reset": true, 00:07:15.045 "nvme_admin": false, 00:07:15.046 "nvme_io": 
false, 00:07:15.046 "nvme_io_md": false, 00:07:15.046 "write_zeroes": true, 00:07:15.046 "zcopy": true, 00:07:15.046 "get_zone_info": false, 00:07:15.046 "zone_management": false, 00:07:15.046 "zone_append": false, 00:07:15.046 "compare": false, 00:07:15.046 "compare_and_write": false, 00:07:15.046 "abort": true, 00:07:15.046 "seek_hole": false, 00:07:15.046 "seek_data": false, 00:07:15.046 "copy": true, 00:07:15.046 "nvme_iov_md": false 00:07:15.046 }, 00:07:15.046 "memory_domains": [ 00:07:15.046 { 00:07:15.046 "dma_device_id": "system", 00:07:15.046 "dma_device_type": 1 00:07:15.046 }, 00:07:15.046 { 00:07:15.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:15.046 "dma_device_type": 2 00:07:15.046 } 00:07:15.046 ], 00:07:15.046 "driver_specific": {} 00:07:15.046 } 00:07:15.046 ] 00:07:15.046 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.046 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:15.046 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:15.046 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:15.046 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:15.046 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:15.046 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:15.046 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:15.046 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:15.046 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:15.046 17:23:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:15.046 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:15.306 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:15.306 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:15.306 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.306 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.306 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.306 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:15.306 "name": "Existed_Raid", 00:07:15.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:15.306 "strip_size_kb": 64, 00:07:15.306 "state": "configuring", 00:07:15.306 "raid_level": "raid0", 00:07:15.306 "superblock": false, 00:07:15.306 "num_base_bdevs": 2, 00:07:15.306 "num_base_bdevs_discovered": 1, 00:07:15.306 "num_base_bdevs_operational": 2, 00:07:15.306 "base_bdevs_list": [ 00:07:15.306 { 00:07:15.306 "name": "BaseBdev1", 00:07:15.306 "uuid": "32c892db-ab9e-4e13-a9c8-73494baa0f73", 00:07:15.306 "is_configured": true, 00:07:15.306 "data_offset": 0, 00:07:15.306 "data_size": 65536 00:07:15.306 }, 00:07:15.306 { 00:07:15.306 "name": "BaseBdev2", 00:07:15.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:15.306 "is_configured": false, 00:07:15.306 "data_offset": 0, 00:07:15.306 "data_size": 0 00:07:15.306 } 00:07:15.306 ] 00:07:15.306 }' 00:07:15.306 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:15.306 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.566 17:23:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:15.566 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.566 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.566 [2024-12-07 17:23:48.903140] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:15.566 [2024-12-07 17:23:48.903267] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:15.566 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.566 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:15.566 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.566 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.566 [2024-12-07 17:23:48.911160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:15.566 [2024-12-07 17:23:48.912983] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:15.566 [2024-12-07 17:23:48.913067] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:15.566 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.566 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:15.566 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:15.566 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:15.566 17:23:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:15.566 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:15.566 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:15.566 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:15.566 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:15.566 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:15.566 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:15.566 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:15.566 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:15.566 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:15.566 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:15.566 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.566 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.566 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.826 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:15.826 "name": "Existed_Raid", 00:07:15.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:15.826 "strip_size_kb": 64, 00:07:15.826 "state": "configuring", 00:07:15.826 "raid_level": "raid0", 00:07:15.826 "superblock": false, 00:07:15.826 "num_base_bdevs": 2, 00:07:15.826 "num_base_bdevs_discovered": 1, 00:07:15.826 "num_base_bdevs_operational": 2, 
00:07:15.826 "base_bdevs_list": [ 00:07:15.826 { 00:07:15.826 "name": "BaseBdev1", 00:07:15.826 "uuid": "32c892db-ab9e-4e13-a9c8-73494baa0f73", 00:07:15.826 "is_configured": true, 00:07:15.826 "data_offset": 0, 00:07:15.826 "data_size": 65536 00:07:15.826 }, 00:07:15.826 { 00:07:15.826 "name": "BaseBdev2", 00:07:15.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:15.826 "is_configured": false, 00:07:15.826 "data_offset": 0, 00:07:15.826 "data_size": 0 00:07:15.826 } 00:07:15.826 ] 00:07:15.826 }' 00:07:15.826 17:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:15.826 17:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.085 17:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:16.085 17:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.085 17:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.085 [2024-12-07 17:23:49.395242] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:16.085 [2024-12-07 17:23:49.395360] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:16.085 [2024-12-07 17:23:49.395389] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:16.085 [2024-12-07 17:23:49.395682] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:16.085 [2024-12-07 17:23:49.395906] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:16.085 [2024-12-07 17:23:49.395991] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:16.085 [2024-12-07 17:23:49.396325] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:16.085 BaseBdev2 00:07:16.085 
17:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.085 17:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:16.085 17:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:16.085 17:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:16.085 17:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:16.085 17:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:16.085 17:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:16.085 17:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:16.085 17:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.085 17:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.086 17:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.086 17:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:16.086 17:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.086 17:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.086 [ 00:07:16.086 { 00:07:16.086 "name": "BaseBdev2", 00:07:16.086 "aliases": [ 00:07:16.086 "e4648853-0d35-4541-ab93-fc59d4fa0f46" 00:07:16.086 ], 00:07:16.086 "product_name": "Malloc disk", 00:07:16.086 "block_size": 512, 00:07:16.086 "num_blocks": 65536, 00:07:16.086 "uuid": "e4648853-0d35-4541-ab93-fc59d4fa0f46", 00:07:16.086 "assigned_rate_limits": { 00:07:16.086 "rw_ios_per_sec": 0, 00:07:16.086 "rw_mbytes_per_sec": 0, 
00:07:16.086 "r_mbytes_per_sec": 0, 00:07:16.086 "w_mbytes_per_sec": 0 00:07:16.086 }, 00:07:16.086 "claimed": true, 00:07:16.086 "claim_type": "exclusive_write", 00:07:16.086 "zoned": false, 00:07:16.086 "supported_io_types": { 00:07:16.086 "read": true, 00:07:16.086 "write": true, 00:07:16.086 "unmap": true, 00:07:16.086 "flush": true, 00:07:16.086 "reset": true, 00:07:16.086 "nvme_admin": false, 00:07:16.086 "nvme_io": false, 00:07:16.086 "nvme_io_md": false, 00:07:16.086 "write_zeroes": true, 00:07:16.086 "zcopy": true, 00:07:16.086 "get_zone_info": false, 00:07:16.086 "zone_management": false, 00:07:16.086 "zone_append": false, 00:07:16.086 "compare": false, 00:07:16.086 "compare_and_write": false, 00:07:16.086 "abort": true, 00:07:16.086 "seek_hole": false, 00:07:16.086 "seek_data": false, 00:07:16.086 "copy": true, 00:07:16.086 "nvme_iov_md": false 00:07:16.086 }, 00:07:16.086 "memory_domains": [ 00:07:16.086 { 00:07:16.086 "dma_device_id": "system", 00:07:16.086 "dma_device_type": 1 00:07:16.086 }, 00:07:16.086 { 00:07:16.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:16.086 "dma_device_type": 2 00:07:16.086 } 00:07:16.086 ], 00:07:16.086 "driver_specific": {} 00:07:16.086 } 00:07:16.086 ] 00:07:16.086 17:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.086 17:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:16.086 17:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:16.086 17:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:16.086 17:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:16.086 17:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:16.086 17:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:07:16.086 17:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:16.086 17:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:16.086 17:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:16.086 17:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:16.086 17:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:16.086 17:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:16.086 17:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:16.086 17:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:16.086 17:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.086 17:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:16.086 17:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.086 17:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.345 17:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:16.345 "name": "Existed_Raid", 00:07:16.345 "uuid": "f53dabac-1ee2-424f-8445-31714a7e79d6", 00:07:16.345 "strip_size_kb": 64, 00:07:16.345 "state": "online", 00:07:16.345 "raid_level": "raid0", 00:07:16.345 "superblock": false, 00:07:16.345 "num_base_bdevs": 2, 00:07:16.345 "num_base_bdevs_discovered": 2, 00:07:16.345 "num_base_bdevs_operational": 2, 00:07:16.345 "base_bdevs_list": [ 00:07:16.345 { 00:07:16.345 "name": "BaseBdev1", 00:07:16.345 "uuid": "32c892db-ab9e-4e13-a9c8-73494baa0f73", 00:07:16.345 
"is_configured": true, 00:07:16.345 "data_offset": 0, 00:07:16.345 "data_size": 65536 00:07:16.345 }, 00:07:16.345 { 00:07:16.345 "name": "BaseBdev2", 00:07:16.345 "uuid": "e4648853-0d35-4541-ab93-fc59d4fa0f46", 00:07:16.345 "is_configured": true, 00:07:16.345 "data_offset": 0, 00:07:16.345 "data_size": 65536 00:07:16.345 } 00:07:16.345 ] 00:07:16.345 }' 00:07:16.345 17:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:16.345 17:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.605 17:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:16.605 17:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:16.605 17:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:16.605 17:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:16.605 17:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:16.605 17:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:16.605 17:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:16.605 17:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:16.605 17:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.605 17:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.605 [2024-12-07 17:23:49.886880] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:16.605 17:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.605 17:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:07:16.605 "name": "Existed_Raid", 00:07:16.605 "aliases": [ 00:07:16.605 "f53dabac-1ee2-424f-8445-31714a7e79d6" 00:07:16.605 ], 00:07:16.605 "product_name": "Raid Volume", 00:07:16.605 "block_size": 512, 00:07:16.605 "num_blocks": 131072, 00:07:16.605 "uuid": "f53dabac-1ee2-424f-8445-31714a7e79d6", 00:07:16.605 "assigned_rate_limits": { 00:07:16.605 "rw_ios_per_sec": 0, 00:07:16.605 "rw_mbytes_per_sec": 0, 00:07:16.605 "r_mbytes_per_sec": 0, 00:07:16.605 "w_mbytes_per_sec": 0 00:07:16.605 }, 00:07:16.605 "claimed": false, 00:07:16.605 "zoned": false, 00:07:16.605 "supported_io_types": { 00:07:16.605 "read": true, 00:07:16.605 "write": true, 00:07:16.605 "unmap": true, 00:07:16.605 "flush": true, 00:07:16.605 "reset": true, 00:07:16.605 "nvme_admin": false, 00:07:16.605 "nvme_io": false, 00:07:16.605 "nvme_io_md": false, 00:07:16.605 "write_zeroes": true, 00:07:16.605 "zcopy": false, 00:07:16.605 "get_zone_info": false, 00:07:16.605 "zone_management": false, 00:07:16.605 "zone_append": false, 00:07:16.605 "compare": false, 00:07:16.605 "compare_and_write": false, 00:07:16.605 "abort": false, 00:07:16.605 "seek_hole": false, 00:07:16.605 "seek_data": false, 00:07:16.605 "copy": false, 00:07:16.605 "nvme_iov_md": false 00:07:16.605 }, 00:07:16.605 "memory_domains": [ 00:07:16.605 { 00:07:16.605 "dma_device_id": "system", 00:07:16.605 "dma_device_type": 1 00:07:16.605 }, 00:07:16.605 { 00:07:16.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:16.605 "dma_device_type": 2 00:07:16.605 }, 00:07:16.605 { 00:07:16.605 "dma_device_id": "system", 00:07:16.605 "dma_device_type": 1 00:07:16.605 }, 00:07:16.605 { 00:07:16.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:16.605 "dma_device_type": 2 00:07:16.605 } 00:07:16.605 ], 00:07:16.605 "driver_specific": { 00:07:16.605 "raid": { 00:07:16.605 "uuid": "f53dabac-1ee2-424f-8445-31714a7e79d6", 00:07:16.605 "strip_size_kb": 64, 00:07:16.605 "state": "online", 00:07:16.605 "raid_level": "raid0", 
00:07:16.605 "superblock": false, 00:07:16.605 "num_base_bdevs": 2, 00:07:16.605 "num_base_bdevs_discovered": 2, 00:07:16.605 "num_base_bdevs_operational": 2, 00:07:16.605 "base_bdevs_list": [ 00:07:16.605 { 00:07:16.605 "name": "BaseBdev1", 00:07:16.605 "uuid": "32c892db-ab9e-4e13-a9c8-73494baa0f73", 00:07:16.605 "is_configured": true, 00:07:16.605 "data_offset": 0, 00:07:16.605 "data_size": 65536 00:07:16.605 }, 00:07:16.605 { 00:07:16.605 "name": "BaseBdev2", 00:07:16.605 "uuid": "e4648853-0d35-4541-ab93-fc59d4fa0f46", 00:07:16.605 "is_configured": true, 00:07:16.605 "data_offset": 0, 00:07:16.605 "data_size": 65536 00:07:16.605 } 00:07:16.605 ] 00:07:16.605 } 00:07:16.605 } 00:07:16.605 }' 00:07:16.605 17:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:16.605 17:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:16.605 BaseBdev2' 00:07:16.605 17:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:16.865 17:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:16.865 17:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:16.865 17:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:16.865 17:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:16.865 17:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.866 17:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.866 17:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:07:16.866 17:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:16.866 17:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:16.866 17:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:16.866 17:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:16.866 17:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:16.866 17:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.866 17:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.866 17:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.866 17:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:16.866 17:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:16.866 17:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:16.866 17:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.866 17:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.866 [2024-12-07 17:23:50.090243] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:16.866 [2024-12-07 17:23:50.090278] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:16.866 [2024-12-07 17:23:50.090332] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:16.866 17:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.866 17:23:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:16.866 17:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:16.866 17:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:16.866 17:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:16.866 17:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:16.866 17:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:16.866 17:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:16.866 17:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:16.866 17:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:16.866 17:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:16.866 17:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:16.866 17:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:16.866 17:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:16.866 17:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:16.866 17:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:16.866 17:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:16.866 17:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.866 17:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:07:16.866 17:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.866 17:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.866 17:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:16.866 "name": "Existed_Raid", 00:07:16.866 "uuid": "f53dabac-1ee2-424f-8445-31714a7e79d6", 00:07:16.866 "strip_size_kb": 64, 00:07:16.866 "state": "offline", 00:07:16.866 "raid_level": "raid0", 00:07:16.866 "superblock": false, 00:07:16.866 "num_base_bdevs": 2, 00:07:16.866 "num_base_bdevs_discovered": 1, 00:07:16.866 "num_base_bdevs_operational": 1, 00:07:16.866 "base_bdevs_list": [ 00:07:16.866 { 00:07:16.866 "name": null, 00:07:16.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:16.866 "is_configured": false, 00:07:16.866 "data_offset": 0, 00:07:16.866 "data_size": 65536 00:07:16.866 }, 00:07:16.866 { 00:07:16.866 "name": "BaseBdev2", 00:07:16.866 "uuid": "e4648853-0d35-4541-ab93-fc59d4fa0f46", 00:07:16.866 "is_configured": true, 00:07:16.866 "data_offset": 0, 00:07:16.866 "data_size": 65536 00:07:16.866 } 00:07:16.866 ] 00:07:16.866 }' 00:07:16.866 17:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:16.866 17:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.436 17:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:17.436 17:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:17.436 17:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:17.436 17:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:17.436 17:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.436 17:23:50 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.436 17:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.436 17:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:17.436 17:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:17.436 17:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:17.436 17:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.436 17:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.436 [2024-12-07 17:23:50.725495] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:17.436 [2024-12-07 17:23:50.725600] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:17.696 17:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.696 17:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:17.696 17:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:17.696 17:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:17.696 17:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:17.696 17:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.696 17:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.696 17:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.696 17:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 
00:07:17.696 17:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:17.696 17:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:17.696 17:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60731 00:07:17.696 17:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60731 ']' 00:07:17.696 17:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 60731 00:07:17.696 17:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:17.696 17:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:17.696 17:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60731 00:07:17.696 17:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:17.696 17:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:17.696 17:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60731' 00:07:17.696 killing process with pid 60731 00:07:17.696 17:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60731 00:07:17.696 [2024-12-07 17:23:50.905401] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:17.696 17:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 60731 00:07:17.696 [2024-12-07 17:23:50.922985] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:19.074 17:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:19.074 00:07:19.074 real 0m5.130s 00:07:19.074 user 0m7.444s 00:07:19.074 sys 0m0.849s 00:07:19.074 17:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:07:19.074 17:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.074 ************************************ 00:07:19.074 END TEST raid_state_function_test 00:07:19.074 ************************************ 00:07:19.074 17:23:52 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:07:19.074 17:23:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:19.074 17:23:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.074 17:23:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:19.074 ************************************ 00:07:19.074 START TEST raid_state_function_test_sb 00:07:19.074 ************************************ 00:07:19.074 17:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:07:19.074 17:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:19.074 17:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:19.074 17:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:19.074 17:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:19.074 17:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:19.074 17:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:19.074 17:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:19.074 17:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:19.074 17:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:19.074 17:23:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:19.074 17:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:19.074 17:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:19.074 17:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:19.074 17:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:19.074 17:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:19.074 17:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:19.074 17:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:19.074 17:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:19.074 17:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:19.074 17:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:19.074 17:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:19.074 17:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:19.074 17:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:19.074 17:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=60984 00:07:19.074 17:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:19.074 17:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60984' 00:07:19.074 Process raid pid: 60984 00:07:19.074 17:23:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 60984 00:07:19.074 17:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 60984 ']' 00:07:19.074 17:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.074 17:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:19.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.074 17:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.074 17:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:19.074 17:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.074 [2024-12-07 17:23:52.206428] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:07:19.074 [2024-12-07 17:23:52.206628] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:19.074 [2024-12-07 17:23:52.381411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.341 [2024-12-07 17:23:52.496663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.341 [2024-12-07 17:23:52.691473] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:19.341 [2024-12-07 17:23:52.691510] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:19.909 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:19.909 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:19.909 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:19.909 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.909 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.909 [2024-12-07 17:23:53.048487] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:19.909 [2024-12-07 17:23:53.048549] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:19.909 [2024-12-07 17:23:53.048561] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:19.909 [2024-12-07 17:23:53.048570] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:19.909 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.909 
17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:19.909 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:19.909 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:19.909 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:19.909 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:19.909 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:19.909 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:19.909 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:19.909 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:19.909 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:19.909 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.909 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:19.909 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.909 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.909 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.909 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:19.909 "name": "Existed_Raid", 00:07:19.909 "uuid": "2d4685cc-145e-42e5-9ee0-fc8ffafac0ba", 00:07:19.909 "strip_size_kb": 
64, 00:07:19.909 "state": "configuring", 00:07:19.909 "raid_level": "raid0", 00:07:19.909 "superblock": true, 00:07:19.909 "num_base_bdevs": 2, 00:07:19.909 "num_base_bdevs_discovered": 0, 00:07:19.909 "num_base_bdevs_operational": 2, 00:07:19.909 "base_bdevs_list": [ 00:07:19.909 { 00:07:19.909 "name": "BaseBdev1", 00:07:19.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:19.909 "is_configured": false, 00:07:19.909 "data_offset": 0, 00:07:19.909 "data_size": 0 00:07:19.909 }, 00:07:19.909 { 00:07:19.909 "name": "BaseBdev2", 00:07:19.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:19.909 "is_configured": false, 00:07:19.909 "data_offset": 0, 00:07:19.909 "data_size": 0 00:07:19.909 } 00:07:19.909 ] 00:07:19.909 }' 00:07:19.909 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:19.909 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.167 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:20.167 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.167 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.167 [2024-12-07 17:23:53.487715] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:20.167 [2024-12-07 17:23:53.487814] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:20.167 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.167 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:20.167 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.167 17:23:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.167 [2024-12-07 17:23:53.495696] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:20.167 [2024-12-07 17:23:53.495787] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:20.167 [2024-12-07 17:23:53.495827] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:20.167 [2024-12-07 17:23:53.495866] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:20.167 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.167 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:20.167 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.167 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.167 [2024-12-07 17:23:53.541980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:20.167 BaseBdev1 00:07:20.167 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.167 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:20.167 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:20.167 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:20.167 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:20.167 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:20.167 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:07:20.167 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:20.167 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.167 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.425 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.425 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:20.425 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.425 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.425 [ 00:07:20.425 { 00:07:20.425 "name": "BaseBdev1", 00:07:20.425 "aliases": [ 00:07:20.425 "dc4f35a1-19ef-4c5e-97da-74274ee116ee" 00:07:20.425 ], 00:07:20.426 "product_name": "Malloc disk", 00:07:20.426 "block_size": 512, 00:07:20.426 "num_blocks": 65536, 00:07:20.426 "uuid": "dc4f35a1-19ef-4c5e-97da-74274ee116ee", 00:07:20.426 "assigned_rate_limits": { 00:07:20.426 "rw_ios_per_sec": 0, 00:07:20.426 "rw_mbytes_per_sec": 0, 00:07:20.426 "r_mbytes_per_sec": 0, 00:07:20.426 "w_mbytes_per_sec": 0 00:07:20.426 }, 00:07:20.426 "claimed": true, 00:07:20.426 "claim_type": "exclusive_write", 00:07:20.426 "zoned": false, 00:07:20.426 "supported_io_types": { 00:07:20.426 "read": true, 00:07:20.426 "write": true, 00:07:20.426 "unmap": true, 00:07:20.426 "flush": true, 00:07:20.426 "reset": true, 00:07:20.426 "nvme_admin": false, 00:07:20.426 "nvme_io": false, 00:07:20.426 "nvme_io_md": false, 00:07:20.426 "write_zeroes": true, 00:07:20.426 "zcopy": true, 00:07:20.426 "get_zone_info": false, 00:07:20.426 "zone_management": false, 00:07:20.426 "zone_append": false, 00:07:20.426 "compare": false, 00:07:20.426 "compare_and_write": false, 00:07:20.426 
"abort": true, 00:07:20.426 "seek_hole": false, 00:07:20.426 "seek_data": false, 00:07:20.426 "copy": true, 00:07:20.426 "nvme_iov_md": false 00:07:20.426 }, 00:07:20.426 "memory_domains": [ 00:07:20.426 { 00:07:20.426 "dma_device_id": "system", 00:07:20.426 "dma_device_type": 1 00:07:20.426 }, 00:07:20.426 { 00:07:20.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:20.426 "dma_device_type": 2 00:07:20.426 } 00:07:20.426 ], 00:07:20.426 "driver_specific": {} 00:07:20.426 } 00:07:20.426 ] 00:07:20.426 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.426 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:20.426 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:20.426 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:20.426 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:20.426 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:20.426 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:20.426 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:20.426 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:20.426 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:20.426 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:20.426 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:20.426 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:20.426 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:20.426 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.426 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.426 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.426 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:20.426 "name": "Existed_Raid", 00:07:20.426 "uuid": "0c3b81a5-9c4d-42e4-ba55-cb6a2b0f803c", 00:07:20.426 "strip_size_kb": 64, 00:07:20.426 "state": "configuring", 00:07:20.426 "raid_level": "raid0", 00:07:20.426 "superblock": true, 00:07:20.426 "num_base_bdevs": 2, 00:07:20.426 "num_base_bdevs_discovered": 1, 00:07:20.426 "num_base_bdevs_operational": 2, 00:07:20.426 "base_bdevs_list": [ 00:07:20.426 { 00:07:20.426 "name": "BaseBdev1", 00:07:20.426 "uuid": "dc4f35a1-19ef-4c5e-97da-74274ee116ee", 00:07:20.426 "is_configured": true, 00:07:20.426 "data_offset": 2048, 00:07:20.426 "data_size": 63488 00:07:20.426 }, 00:07:20.426 { 00:07:20.426 "name": "BaseBdev2", 00:07:20.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:20.426 "is_configured": false, 00:07:20.426 "data_offset": 0, 00:07:20.426 "data_size": 0 00:07:20.426 } 00:07:20.426 ] 00:07:20.426 }' 00:07:20.426 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:20.426 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.692 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:20.692 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.692 17:23:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:20.692 [2024-12-07 17:23:53.953362] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:20.692 [2024-12-07 17:23:53.953502] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:20.692 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.692 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:20.692 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.692 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.692 [2024-12-07 17:23:53.961432] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:20.692 [2024-12-07 17:23:53.963885] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:20.692 [2024-12-07 17:23:53.964015] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:20.692 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.692 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:20.692 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:20.692 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:20.692 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:20.692 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:20.692 17:23:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:20.692 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:20.692 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:20.692 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:20.692 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:20.692 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:20.692 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:20.692 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:20.692 17:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:20.692 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.692 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.692 17:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.692 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:20.692 "name": "Existed_Raid", 00:07:20.692 "uuid": "ef2fa03e-11a9-45c3-8b26-5a229cbe4127", 00:07:20.692 "strip_size_kb": 64, 00:07:20.692 "state": "configuring", 00:07:20.692 "raid_level": "raid0", 00:07:20.692 "superblock": true, 00:07:20.692 "num_base_bdevs": 2, 00:07:20.692 "num_base_bdevs_discovered": 1, 00:07:20.692 "num_base_bdevs_operational": 2, 00:07:20.692 "base_bdevs_list": [ 00:07:20.692 { 00:07:20.692 "name": "BaseBdev1", 00:07:20.692 "uuid": "dc4f35a1-19ef-4c5e-97da-74274ee116ee", 00:07:20.692 "is_configured": true, 00:07:20.692 "data_offset": 2048, 
00:07:20.692 "data_size": 63488 00:07:20.692 }, 00:07:20.692 { 00:07:20.692 "name": "BaseBdev2", 00:07:20.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:20.692 "is_configured": false, 00:07:20.692 "data_offset": 0, 00:07:20.692 "data_size": 0 00:07:20.692 } 00:07:20.692 ] 00:07:20.692 }' 00:07:20.692 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:20.692 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.268 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:21.268 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.268 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.268 [2024-12-07 17:23:54.383789] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:21.268 [2024-12-07 17:23:54.384215] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:21.268 [2024-12-07 17:23:54.384275] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:21.268 BaseBdev2 [2024-12-07 17:23:54.384618] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:21.268 [2024-12-07 17:23:54.384880] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:21.268 [2024-12-07 17:23:54.384952] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:21.268 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.268 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 [2024-12-07 17:23:54.385241] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:07:21.268 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:21.268 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:21.268 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:21.268 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:21.268 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:21.268 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:21.268 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.268 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.268 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.268 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:21.268 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.268 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.268 [ 00:07:21.268 { 00:07:21.268 "name": "BaseBdev2", 00:07:21.268 "aliases": [ 00:07:21.268 "b8ea70d1-ee4f-4ee8-99b6-4a4d2e78457b" 00:07:21.268 ], 00:07:21.268 "product_name": "Malloc disk", 00:07:21.268 "block_size": 512, 00:07:21.268 "num_blocks": 65536, 00:07:21.268 "uuid": "b8ea70d1-ee4f-4ee8-99b6-4a4d2e78457b", 00:07:21.268 "assigned_rate_limits": { 00:07:21.268 "rw_ios_per_sec": 0, 00:07:21.268 "rw_mbytes_per_sec": 0, 00:07:21.268 "r_mbytes_per_sec": 0, 00:07:21.268 "w_mbytes_per_sec": 0 00:07:21.268 }, 00:07:21.268 "claimed": true, 00:07:21.268 "claim_type": 
"exclusive_write", 00:07:21.268 "zoned": false, 00:07:21.268 "supported_io_types": { 00:07:21.268 "read": true, 00:07:21.268 "write": true, 00:07:21.268 "unmap": true, 00:07:21.268 "flush": true, 00:07:21.268 "reset": true, 00:07:21.268 "nvme_admin": false, 00:07:21.268 "nvme_io": false, 00:07:21.268 "nvme_io_md": false, 00:07:21.268 "write_zeroes": true, 00:07:21.268 "zcopy": true, 00:07:21.268 "get_zone_info": false, 00:07:21.268 "zone_management": false, 00:07:21.268 "zone_append": false, 00:07:21.268 "compare": false, 00:07:21.268 "compare_and_write": false, 00:07:21.268 "abort": true, 00:07:21.268 "seek_hole": false, 00:07:21.268 "seek_data": false, 00:07:21.268 "copy": true, 00:07:21.268 "nvme_iov_md": false 00:07:21.268 }, 00:07:21.268 "memory_domains": [ 00:07:21.268 { 00:07:21.268 "dma_device_id": "system", 00:07:21.268 "dma_device_type": 1 00:07:21.268 }, 00:07:21.268 { 00:07:21.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:21.268 "dma_device_type": 2 00:07:21.268 } 00:07:21.268 ], 00:07:21.268 "driver_specific": {} 00:07:21.268 } 00:07:21.268 ] 00:07:21.268 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.268 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:21.268 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:21.268 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:21.268 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:21.268 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:21.268 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:21.268 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:07:21.268 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:21.268 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:21.268 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:21.268 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:21.268 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:21.268 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:21.268 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:21.268 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.268 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:21.268 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.268 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.268 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:21.268 "name": "Existed_Raid", 00:07:21.268 "uuid": "ef2fa03e-11a9-45c3-8b26-5a229cbe4127", 00:07:21.268 "strip_size_kb": 64, 00:07:21.268 "state": "online", 00:07:21.268 "raid_level": "raid0", 00:07:21.268 "superblock": true, 00:07:21.268 "num_base_bdevs": 2, 00:07:21.268 "num_base_bdevs_discovered": 2, 00:07:21.268 "num_base_bdevs_operational": 2, 00:07:21.268 "base_bdevs_list": [ 00:07:21.268 { 00:07:21.268 "name": "BaseBdev1", 00:07:21.268 "uuid": "dc4f35a1-19ef-4c5e-97da-74274ee116ee", 00:07:21.268 "is_configured": true, 00:07:21.268 "data_offset": 2048, 00:07:21.268 "data_size": 63488 
00:07:21.269 }, 00:07:21.269 { 00:07:21.269 "name": "BaseBdev2", 00:07:21.269 "uuid": "b8ea70d1-ee4f-4ee8-99b6-4a4d2e78457b", 00:07:21.269 "is_configured": true, 00:07:21.269 "data_offset": 2048, 00:07:21.269 "data_size": 63488 00:07:21.269 } 00:07:21.269 ] 00:07:21.269 }' 00:07:21.269 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:21.269 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.527 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:21.527 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:21.527 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:21.527 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:21.527 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:21.527 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:21.527 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:21.527 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:21.527 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.527 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.527 [2024-12-07 17:23:54.795486] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:21.527 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.527 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:21.527 "name": 
"Existed_Raid", 00:07:21.527 "aliases": [ 00:07:21.527 "ef2fa03e-11a9-45c3-8b26-5a229cbe4127" 00:07:21.527 ], 00:07:21.527 "product_name": "Raid Volume", 00:07:21.527 "block_size": 512, 00:07:21.527 "num_blocks": 126976, 00:07:21.527 "uuid": "ef2fa03e-11a9-45c3-8b26-5a229cbe4127", 00:07:21.527 "assigned_rate_limits": { 00:07:21.527 "rw_ios_per_sec": 0, 00:07:21.527 "rw_mbytes_per_sec": 0, 00:07:21.527 "r_mbytes_per_sec": 0, 00:07:21.527 "w_mbytes_per_sec": 0 00:07:21.527 }, 00:07:21.527 "claimed": false, 00:07:21.527 "zoned": false, 00:07:21.527 "supported_io_types": { 00:07:21.527 "read": true, 00:07:21.527 "write": true, 00:07:21.527 "unmap": true, 00:07:21.527 "flush": true, 00:07:21.527 "reset": true, 00:07:21.527 "nvme_admin": false, 00:07:21.527 "nvme_io": false, 00:07:21.527 "nvme_io_md": false, 00:07:21.527 "write_zeroes": true, 00:07:21.527 "zcopy": false, 00:07:21.527 "get_zone_info": false, 00:07:21.527 "zone_management": false, 00:07:21.527 "zone_append": false, 00:07:21.527 "compare": false, 00:07:21.527 "compare_and_write": false, 00:07:21.527 "abort": false, 00:07:21.527 "seek_hole": false, 00:07:21.527 "seek_data": false, 00:07:21.527 "copy": false, 00:07:21.527 "nvme_iov_md": false 00:07:21.527 }, 00:07:21.527 "memory_domains": [ 00:07:21.527 { 00:07:21.527 "dma_device_id": "system", 00:07:21.527 "dma_device_type": 1 00:07:21.527 }, 00:07:21.527 { 00:07:21.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:21.527 "dma_device_type": 2 00:07:21.527 }, 00:07:21.527 { 00:07:21.527 "dma_device_id": "system", 00:07:21.527 "dma_device_type": 1 00:07:21.527 }, 00:07:21.527 { 00:07:21.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:21.527 "dma_device_type": 2 00:07:21.527 } 00:07:21.527 ], 00:07:21.527 "driver_specific": { 00:07:21.527 "raid": { 00:07:21.527 "uuid": "ef2fa03e-11a9-45c3-8b26-5a229cbe4127", 00:07:21.527 "strip_size_kb": 64, 00:07:21.527 "state": "online", 00:07:21.527 "raid_level": "raid0", 00:07:21.527 "superblock": true, 00:07:21.527 
"num_base_bdevs": 2, 00:07:21.527 "num_base_bdevs_discovered": 2, 00:07:21.527 "num_base_bdevs_operational": 2, 00:07:21.527 "base_bdevs_list": [ 00:07:21.527 { 00:07:21.527 "name": "BaseBdev1", 00:07:21.527 "uuid": "dc4f35a1-19ef-4c5e-97da-74274ee116ee", 00:07:21.527 "is_configured": true, 00:07:21.527 "data_offset": 2048, 00:07:21.527 "data_size": 63488 00:07:21.527 }, 00:07:21.527 { 00:07:21.527 "name": "BaseBdev2", 00:07:21.527 "uuid": "b8ea70d1-ee4f-4ee8-99b6-4a4d2e78457b", 00:07:21.527 "is_configured": true, 00:07:21.527 "data_offset": 2048, 00:07:21.527 "data_size": 63488 00:07:21.527 } 00:07:21.527 ] 00:07:21.527 } 00:07:21.527 } 00:07:21.527 }' 00:07:21.527 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:21.527 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:21.527 BaseBdev2' 00:07:21.527 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:21.786 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:21.786 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:21.786 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:21.786 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.786 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:21.786 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.786 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:21.786 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:21.786 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:21.786 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:21.786 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:21.786 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:21.786 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.786 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.786 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.786 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:21.786 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:21.786 17:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:21.786 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.786 17:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.786 [2024-12-07 17:23:54.983166] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:21.786 [2024-12-07 17:23:54.983207] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:21.786 [2024-12-07 17:23:54.983266] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:21.786 17:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:07:21.786 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:21.786 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:21.786 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:21.786 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:21.786 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:21.786 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:21.786 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:21.786 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:21.786 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:21.786 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:21.786 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:21.786 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:21.786 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:21.786 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:21.786 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:21.786 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:21.786 17:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.786 17:23:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.786 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:21.786 17:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.786 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:21.786 "name": "Existed_Raid", 00:07:21.786 "uuid": "ef2fa03e-11a9-45c3-8b26-5a229cbe4127", 00:07:21.786 "strip_size_kb": 64, 00:07:21.786 "state": "offline", 00:07:21.786 "raid_level": "raid0", 00:07:21.786 "superblock": true, 00:07:21.786 "num_base_bdevs": 2, 00:07:21.786 "num_base_bdevs_discovered": 1, 00:07:21.786 "num_base_bdevs_operational": 1, 00:07:21.786 "base_bdevs_list": [ 00:07:21.786 { 00:07:21.786 "name": null, 00:07:21.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:21.786 "is_configured": false, 00:07:21.786 "data_offset": 0, 00:07:21.786 "data_size": 63488 00:07:21.786 }, 00:07:21.786 { 00:07:21.786 "name": "BaseBdev2", 00:07:21.786 "uuid": "b8ea70d1-ee4f-4ee8-99b6-4a4d2e78457b", 00:07:21.786 "is_configured": true, 00:07:21.786 "data_offset": 2048, 00:07:21.786 "data_size": 63488 00:07:21.786 } 00:07:21.786 ] 00:07:21.786 }' 00:07:21.786 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:21.786 17:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.350 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:22.350 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:22.350 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.350 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:22.350 17:23:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.350 17:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.350 17:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.350 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:22.350 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:22.350 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:22.351 17:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.351 17:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.351 [2024-12-07 17:23:55.531099] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:22.351 [2024-12-07 17:23:55.531163] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:22.351 17:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.351 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:22.351 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:22.351 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.351 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:22.351 17:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.351 17:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.351 17:23:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.351 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:22.351 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:22.351 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:22.351 17:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 60984 00:07:22.351 17:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 60984 ']' 00:07:22.351 17:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 60984 00:07:22.351 17:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:22.351 17:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:22.351 17:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60984 00:07:22.351 17:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:22.351 17:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:22.351 17:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60984' 00:07:22.351 killing process with pid 60984 00:07:22.351 17:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 60984 00:07:22.351 17:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 60984 00:07:22.351 [2024-12-07 17:23:55.710090] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:22.351 [2024-12-07 17:23:55.730457] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:23.731 17:23:56 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@328 -- # return 0 00:07:23.731 00:07:23.731 real 0m4.822s 00:07:23.731 user 0m6.873s 00:07:23.731 sys 0m0.616s 00:07:23.731 17:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.731 17:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.731 ************************************ 00:07:23.731 END TEST raid_state_function_test_sb 00:07:23.731 ************************************ 00:07:23.731 17:23:56 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:07:23.731 17:23:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:23.731 17:23:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.731 17:23:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:23.731 ************************************ 00:07:23.731 START TEST raid_superblock_test 00:07:23.731 ************************************ 00:07:23.731 17:23:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:07:23.731 17:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:07:23.731 17:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:23.731 17:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:23.732 17:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:23.732 17:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:23.732 17:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:23.732 17:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:23.732 17:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:23.732 17:23:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:23.732 17:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:23.732 17:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:23.732 17:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:23.732 17:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:23.732 17:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:07:23.732 17:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:23.732 17:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:23.732 17:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61236 00:07:23.732 17:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:23.732 17:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61236 00:07:23.732 17:23:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61236 ']' 00:07:23.732 17:23:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.732 17:23:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:23.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.732 17:23:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:23.732 17:23:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:23.732 17:23:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.732 [2024-12-07 17:23:57.095404] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:07:23.732 [2024-12-07 17:23:57.095535] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61236 ] 00:07:23.991 [2024-12-07 17:23:57.272272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.251 [2024-12-07 17:23:57.397467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.251 [2024-12-07 17:23:57.599081] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:24.251 [2024-12-07 17:23:57.599122] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:24.821 17:23:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:24.821 17:23:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:24.821 17:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:24.821 17:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:24.821 17:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:24.821 17:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:24.821 17:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:24.821 17:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:24.821 17:23:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:24.821 17:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:24.821 17:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:24.821 17:23:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.821 17:23:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.821 malloc1 00:07:24.821 17:23:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.821 17:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:24.821 17:23:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.821 17:23:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.821 [2024-12-07 17:23:57.987517] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:24.821 [2024-12-07 17:23:57.987697] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:24.821 [2024-12-07 17:23:57.987746] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:24.821 [2024-12-07 17:23:57.987784] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:24.821 [2024-12-07 17:23:57.990355] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:24.821 [2024-12-07 17:23:57.990448] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:24.821 pt1 00:07:24.821 17:23:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.821 17:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:24.821 17:23:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:24.821 17:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:24.821 17:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:24.821 17:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:24.821 17:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:24.821 17:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:24.821 17:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:24.821 17:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:24.821 17:23:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.821 17:23:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.821 malloc2 00:07:24.821 17:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.821 17:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:24.821 17:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.821 17:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.821 [2024-12-07 17:23:58.053284] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:24.821 [2024-12-07 17:23:58.053366] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:24.821 [2024-12-07 17:23:58.053398] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:24.821 
[2024-12-07 17:23:58.053409] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:24.821 [2024-12-07 17:23:58.055971] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:24.821 [2024-12-07 17:23:58.056015] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:24.821 pt2 00:07:24.821 17:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.821 17:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:24.821 17:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:24.821 17:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:24.821 17:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.821 17:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.821 [2024-12-07 17:23:58.065336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:24.821 [2024-12-07 17:23:58.067480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:24.821 [2024-12-07 17:23:58.067773] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:24.821 [2024-12-07 17:23:58.067795] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:24.821 [2024-12-07 17:23:58.068090] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:24.821 [2024-12-07 17:23:58.068293] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:24.821 [2024-12-07 17:23:58.068308] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:24.821 [2024-12-07 17:23:58.068509] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:24.821 17:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.821 17:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:24.821 17:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:24.821 17:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:24.821 17:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:24.821 17:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:24.821 17:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:24.821 17:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:24.821 17:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:24.821 17:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:24.821 17:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:24.822 17:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:24.822 17:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:24.822 17:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.822 17:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.822 17:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.822 17:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:24.822 "name": "raid_bdev1", 00:07:24.822 "uuid": 
"e7d91f1f-9081-4698-9287-51c40fccd4d9", 00:07:24.822 "strip_size_kb": 64, 00:07:24.822 "state": "online", 00:07:24.822 "raid_level": "raid0", 00:07:24.822 "superblock": true, 00:07:24.822 "num_base_bdevs": 2, 00:07:24.822 "num_base_bdevs_discovered": 2, 00:07:24.822 "num_base_bdevs_operational": 2, 00:07:24.822 "base_bdevs_list": [ 00:07:24.822 { 00:07:24.822 "name": "pt1", 00:07:24.822 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:24.822 "is_configured": true, 00:07:24.822 "data_offset": 2048, 00:07:24.822 "data_size": 63488 00:07:24.822 }, 00:07:24.822 { 00:07:24.822 "name": "pt2", 00:07:24.822 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:24.822 "is_configured": true, 00:07:24.822 "data_offset": 2048, 00:07:24.822 "data_size": 63488 00:07:24.822 } 00:07:24.822 ] 00:07:24.822 }' 00:07:24.822 17:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:24.822 17:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.392 17:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:25.392 17:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:25.392 17:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:25.392 17:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:25.392 17:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:25.392 17:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:25.392 17:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:25.392 17:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:25.392 17:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.392 17:23:58 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.392 [2024-12-07 17:23:58.552824] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:25.392 17:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.392 17:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:25.392 "name": "raid_bdev1", 00:07:25.392 "aliases": [ 00:07:25.392 "e7d91f1f-9081-4698-9287-51c40fccd4d9" 00:07:25.392 ], 00:07:25.392 "product_name": "Raid Volume", 00:07:25.392 "block_size": 512, 00:07:25.392 "num_blocks": 126976, 00:07:25.392 "uuid": "e7d91f1f-9081-4698-9287-51c40fccd4d9", 00:07:25.392 "assigned_rate_limits": { 00:07:25.392 "rw_ios_per_sec": 0, 00:07:25.392 "rw_mbytes_per_sec": 0, 00:07:25.392 "r_mbytes_per_sec": 0, 00:07:25.392 "w_mbytes_per_sec": 0 00:07:25.392 }, 00:07:25.392 "claimed": false, 00:07:25.392 "zoned": false, 00:07:25.392 "supported_io_types": { 00:07:25.392 "read": true, 00:07:25.392 "write": true, 00:07:25.392 "unmap": true, 00:07:25.392 "flush": true, 00:07:25.392 "reset": true, 00:07:25.392 "nvme_admin": false, 00:07:25.392 "nvme_io": false, 00:07:25.392 "nvme_io_md": false, 00:07:25.392 "write_zeroes": true, 00:07:25.392 "zcopy": false, 00:07:25.392 "get_zone_info": false, 00:07:25.392 "zone_management": false, 00:07:25.392 "zone_append": false, 00:07:25.392 "compare": false, 00:07:25.392 "compare_and_write": false, 00:07:25.392 "abort": false, 00:07:25.392 "seek_hole": false, 00:07:25.392 "seek_data": false, 00:07:25.392 "copy": false, 00:07:25.392 "nvme_iov_md": false 00:07:25.392 }, 00:07:25.392 "memory_domains": [ 00:07:25.392 { 00:07:25.392 "dma_device_id": "system", 00:07:25.392 "dma_device_type": 1 00:07:25.392 }, 00:07:25.392 { 00:07:25.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:25.392 "dma_device_type": 2 00:07:25.392 }, 00:07:25.392 { 00:07:25.392 "dma_device_id": "system", 00:07:25.392 "dma_device_type": 
1 00:07:25.392 }, 00:07:25.392 { 00:07:25.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:25.392 "dma_device_type": 2 00:07:25.392 } 00:07:25.392 ], 00:07:25.392 "driver_specific": { 00:07:25.392 "raid": { 00:07:25.392 "uuid": "e7d91f1f-9081-4698-9287-51c40fccd4d9", 00:07:25.392 "strip_size_kb": 64, 00:07:25.392 "state": "online", 00:07:25.392 "raid_level": "raid0", 00:07:25.392 "superblock": true, 00:07:25.392 "num_base_bdevs": 2, 00:07:25.392 "num_base_bdevs_discovered": 2, 00:07:25.392 "num_base_bdevs_operational": 2, 00:07:25.392 "base_bdevs_list": [ 00:07:25.392 { 00:07:25.392 "name": "pt1", 00:07:25.392 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:25.392 "is_configured": true, 00:07:25.392 "data_offset": 2048, 00:07:25.392 "data_size": 63488 00:07:25.392 }, 00:07:25.392 { 00:07:25.392 "name": "pt2", 00:07:25.392 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:25.392 "is_configured": true, 00:07:25.392 "data_offset": 2048, 00:07:25.392 "data_size": 63488 00:07:25.392 } 00:07:25.392 ] 00:07:25.392 } 00:07:25.392 } 00:07:25.392 }' 00:07:25.392 17:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:25.392 17:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:25.392 pt2' 00:07:25.392 17:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:25.392 17:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:25.392 17:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:25.392 17:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:25.392 17:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b 
pt1 00:07:25.392 17:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.392 17:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.392 17:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.392 17:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:25.392 17:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:25.392 17:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:25.392 17:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:25.393 17:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:25.393 17:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.393 17:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.393 17:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.653 17:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:25.653 17:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:25.653 17:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:25.653 17:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:25.653 17:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.653 17:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.653 [2024-12-07 17:23:58.784343] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:25.653 17:23:58 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.653 17:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e7d91f1f-9081-4698-9287-51c40fccd4d9 00:07:25.653 17:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z e7d91f1f-9081-4698-9287-51c40fccd4d9 ']' 00:07:25.653 17:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:25.653 17:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.653 17:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.653 [2024-12-07 17:23:58.831982] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:25.653 [2024-12-07 17:23:58.832016] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:25.653 [2024-12-07 17:23:58.832121] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:25.653 [2024-12-07 17:23:58.832190] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:25.653 [2024-12-07 17:23:58.832207] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:25.653 17:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.653 17:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.653 17:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:25.653 17:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.653 17:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.653 17:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.653 17:23:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:25.653 17:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:25.653 17:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:25.653 17:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:25.653 17:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.653 17:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.653 17:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.653 17:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:25.653 17:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:25.653 17:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.653 17:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.653 17:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.653 17:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:25.653 17:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.653 17:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.653 17:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:25.653 17:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.653 17:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:25.653 17:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd 
bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:25.653 17:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:25.653 17:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:25.653 17:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:25.653 17:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:25.653 17:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:25.653 17:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:25.653 17:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:25.653 17:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.653 17:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.653 [2024-12-07 17:23:58.971826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:25.653 [2024-12-07 17:23:58.974259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:25.653 [2024-12-07 17:23:58.974402] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:25.653 [2024-12-07 17:23:58.974523] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:25.653 [2024-12-07 17:23:58.974586] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:25.653 [2024-12-07 17:23:58.974649] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:25.653 request: 00:07:25.653 { 00:07:25.653 "name": "raid_bdev1", 00:07:25.653 "raid_level": "raid0", 00:07:25.653 "base_bdevs": [ 00:07:25.653 "malloc1", 00:07:25.653 "malloc2" 00:07:25.653 ], 00:07:25.653 "strip_size_kb": 64, 00:07:25.653 "superblock": false, 00:07:25.653 "method": "bdev_raid_create", 00:07:25.653 "req_id": 1 00:07:25.653 } 00:07:25.653 Got JSON-RPC error response 00:07:25.653 response: 00:07:25.653 { 00:07:25.653 "code": -17, 00:07:25.653 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:25.653 } 00:07:25.653 17:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:25.653 17:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:25.653 17:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:25.653 17:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:25.653 17:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:25.653 17:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.653 17:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:25.653 17:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.653 17:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.653 17:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.653 17:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:25.653 17:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:25.653 17:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:07:25.653 17:23:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.653 17:23:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.913 [2024-12-07 17:23:59.035652] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:25.913 [2024-12-07 17:23:59.035785] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:25.913 [2024-12-07 17:23:59.035825] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:25.913 [2024-12-07 17:23:59.035868] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:25.913 [2024-12-07 17:23:59.038491] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:25.913 [2024-12-07 17:23:59.038577] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:25.913 [2024-12-07 17:23:59.038709] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:25.913 [2024-12-07 17:23:59.038800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:25.913 pt1 00:07:25.913 17:23:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.913 17:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:07:25.913 17:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:25.913 17:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:25.913 17:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:25.913 17:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:25.913 17:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:07:25.913 17:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:25.913 17:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:25.913 17:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:25.913 17:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:25.913 17:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:25.913 17:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.913 17:23:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.913 17:23:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.913 17:23:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.913 17:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:25.913 "name": "raid_bdev1", 00:07:25.913 "uuid": "e7d91f1f-9081-4698-9287-51c40fccd4d9", 00:07:25.913 "strip_size_kb": 64, 00:07:25.913 "state": "configuring", 00:07:25.913 "raid_level": "raid0", 00:07:25.913 "superblock": true, 00:07:25.913 "num_base_bdevs": 2, 00:07:25.913 "num_base_bdevs_discovered": 1, 00:07:25.913 "num_base_bdevs_operational": 2, 00:07:25.913 "base_bdevs_list": [ 00:07:25.913 { 00:07:25.913 "name": "pt1", 00:07:25.913 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:25.913 "is_configured": true, 00:07:25.913 "data_offset": 2048, 00:07:25.913 "data_size": 63488 00:07:25.913 }, 00:07:25.913 { 00:07:25.913 "name": null, 00:07:25.913 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:25.913 "is_configured": false, 00:07:25.913 "data_offset": 2048, 00:07:25.913 "data_size": 63488 00:07:25.913 } 00:07:25.913 ] 00:07:25.913 }' 00:07:25.913 17:23:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:25.913 17:23:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.173 17:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:26.173 17:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:26.173 17:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:26.173 17:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:26.173 17:23:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.173 17:23:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.173 [2024-12-07 17:23:59.491108] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:26.173 [2024-12-07 17:23:59.491330] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:26.173 [2024-12-07 17:23:59.491377] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:26.173 [2024-12-07 17:23:59.491424] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:26.173 [2024-12-07 17:23:59.492031] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:26.173 [2024-12-07 17:23:59.492116] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:26.173 [2024-12-07 17:23:59.492263] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:26.173 [2024-12-07 17:23:59.492333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:26.173 [2024-12-07 17:23:59.492510] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:26.173 [2024-12-07 17:23:59.492557] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:26.173 [2024-12-07 17:23:59.492867] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:26.173 [2024-12-07 17:23:59.493094] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:26.173 [2024-12-07 17:23:59.493166] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:26.173 [2024-12-07 17:23:59.493395] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:26.173 pt2 00:07:26.173 17:23:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.173 17:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:26.173 17:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:26.173 17:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:26.173 17:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:26.173 17:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:26.173 17:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:26.173 17:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:26.173 17:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:26.173 17:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:26.173 17:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:26.173 17:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:26.173 17:23:59 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:07:26.173 17:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:26.173 17:23:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.173 17:23:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.173 17:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:26.173 17:23:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.173 17:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:26.173 "name": "raid_bdev1", 00:07:26.173 "uuid": "e7d91f1f-9081-4698-9287-51c40fccd4d9", 00:07:26.173 "strip_size_kb": 64, 00:07:26.173 "state": "online", 00:07:26.173 "raid_level": "raid0", 00:07:26.173 "superblock": true, 00:07:26.173 "num_base_bdevs": 2, 00:07:26.173 "num_base_bdevs_discovered": 2, 00:07:26.173 "num_base_bdevs_operational": 2, 00:07:26.173 "base_bdevs_list": [ 00:07:26.173 { 00:07:26.173 "name": "pt1", 00:07:26.173 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:26.173 "is_configured": true, 00:07:26.173 "data_offset": 2048, 00:07:26.173 "data_size": 63488 00:07:26.173 }, 00:07:26.173 { 00:07:26.173 "name": "pt2", 00:07:26.173 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:26.173 "is_configured": true, 00:07:26.173 "data_offset": 2048, 00:07:26.173 "data_size": 63488 00:07:26.173 } 00:07:26.173 ] 00:07:26.173 }' 00:07:26.173 17:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:26.173 17:23:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.742 17:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:26.742 17:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:26.742 
17:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:26.742 17:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:26.742 17:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:26.742 17:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:26.742 17:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:26.742 17:23:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.742 17:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:26.742 17:23:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.742 [2024-12-07 17:23:59.990480] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:26.742 17:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.742 17:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:26.742 "name": "raid_bdev1", 00:07:26.742 "aliases": [ 00:07:26.742 "e7d91f1f-9081-4698-9287-51c40fccd4d9" 00:07:26.742 ], 00:07:26.742 "product_name": "Raid Volume", 00:07:26.742 "block_size": 512, 00:07:26.742 "num_blocks": 126976, 00:07:26.742 "uuid": "e7d91f1f-9081-4698-9287-51c40fccd4d9", 00:07:26.742 "assigned_rate_limits": { 00:07:26.742 "rw_ios_per_sec": 0, 00:07:26.742 "rw_mbytes_per_sec": 0, 00:07:26.742 "r_mbytes_per_sec": 0, 00:07:26.742 "w_mbytes_per_sec": 0 00:07:26.742 }, 00:07:26.742 "claimed": false, 00:07:26.742 "zoned": false, 00:07:26.742 "supported_io_types": { 00:07:26.742 "read": true, 00:07:26.742 "write": true, 00:07:26.742 "unmap": true, 00:07:26.742 "flush": true, 00:07:26.742 "reset": true, 00:07:26.742 "nvme_admin": false, 00:07:26.742 "nvme_io": false, 00:07:26.742 "nvme_io_md": false, 00:07:26.742 
"write_zeroes": true, 00:07:26.742 "zcopy": false, 00:07:26.742 "get_zone_info": false, 00:07:26.742 "zone_management": false, 00:07:26.742 "zone_append": false, 00:07:26.743 "compare": false, 00:07:26.743 "compare_and_write": false, 00:07:26.743 "abort": false, 00:07:26.743 "seek_hole": false, 00:07:26.743 "seek_data": false, 00:07:26.743 "copy": false, 00:07:26.743 "nvme_iov_md": false 00:07:26.743 }, 00:07:26.743 "memory_domains": [ 00:07:26.743 { 00:07:26.743 "dma_device_id": "system", 00:07:26.743 "dma_device_type": 1 00:07:26.743 }, 00:07:26.743 { 00:07:26.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:26.743 "dma_device_type": 2 00:07:26.743 }, 00:07:26.743 { 00:07:26.743 "dma_device_id": "system", 00:07:26.743 "dma_device_type": 1 00:07:26.743 }, 00:07:26.743 { 00:07:26.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:26.743 "dma_device_type": 2 00:07:26.743 } 00:07:26.743 ], 00:07:26.743 "driver_specific": { 00:07:26.743 "raid": { 00:07:26.743 "uuid": "e7d91f1f-9081-4698-9287-51c40fccd4d9", 00:07:26.743 "strip_size_kb": 64, 00:07:26.743 "state": "online", 00:07:26.743 "raid_level": "raid0", 00:07:26.743 "superblock": true, 00:07:26.743 "num_base_bdevs": 2, 00:07:26.743 "num_base_bdevs_discovered": 2, 00:07:26.743 "num_base_bdevs_operational": 2, 00:07:26.743 "base_bdevs_list": [ 00:07:26.743 { 00:07:26.743 "name": "pt1", 00:07:26.743 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:26.743 "is_configured": true, 00:07:26.743 "data_offset": 2048, 00:07:26.743 "data_size": 63488 00:07:26.743 }, 00:07:26.743 { 00:07:26.743 "name": "pt2", 00:07:26.743 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:26.743 "is_configured": true, 00:07:26.743 "data_offset": 2048, 00:07:26.743 "data_size": 63488 00:07:26.743 } 00:07:26.743 ] 00:07:26.743 } 00:07:26.743 } 00:07:26.743 }' 00:07:26.743 17:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:07:26.743 17:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:26.743 pt2' 00:07:26.743 17:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:26.743 17:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:26.743 17:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:26.743 17:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:26.743 17:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:26.743 17:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.743 17:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.002 17:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.002 17:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:27.002 17:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:27.002 17:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:27.002 17:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:27.002 17:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:27.002 17:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.002 17:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.002 17:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.002 17:24:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:27.002 17:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:27.002 17:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:27.002 17:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:27.002 17:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.002 17:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.002 [2024-12-07 17:24:00.210309] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:27.002 17:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.002 17:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e7d91f1f-9081-4698-9287-51c40fccd4d9 '!=' e7d91f1f-9081-4698-9287-51c40fccd4d9 ']' 00:07:27.002 17:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:07:27.002 17:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:27.002 17:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:27.002 17:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61236 00:07:27.003 17:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61236 ']' 00:07:27.003 17:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 61236 00:07:27.003 17:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:27.003 17:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:27.003 17:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61236 00:07:27.003 killing process with pid 61236 
00:07:27.003 17:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:27.003 17:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:27.003 17:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61236' 00:07:27.003 17:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61236 00:07:27.003 [2024-12-07 17:24:00.292003] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:27.003 [2024-12-07 17:24:00.292118] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:27.003 [2024-12-07 17:24:00.292179] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:27.003 [2024-12-07 17:24:00.292194] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:27.003 17:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61236 00:07:27.261 [2024-12-07 17:24:00.522547] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:28.752 17:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:28.752 00:07:28.752 real 0m4.757s 00:07:28.752 user 0m6.660s 00:07:28.752 sys 0m0.776s 00:07:28.752 17:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.752 17:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.752 ************************************ 00:07:28.752 END TEST raid_superblock_test 00:07:28.752 ************************************ 00:07:28.752 17:24:01 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:07:28.752 17:24:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:28.752 17:24:01 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.752 17:24:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:28.752 ************************************ 00:07:28.752 START TEST raid_read_error_test 00:07:28.752 ************************************ 00:07:28.752 17:24:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:07:28.752 17:24:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:28.752 17:24:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:28.752 17:24:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:28.752 17:24:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:28.752 17:24:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:28.752 17:24:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:28.752 17:24:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:28.752 17:24:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:28.752 17:24:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:28.752 17:24:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:28.752 17:24:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:28.752 17:24:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:28.752 17:24:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:28.752 17:24:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:28.752 17:24:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:28.752 17:24:01 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:28.752 17:24:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:28.752 17:24:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:28.752 17:24:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:28.752 17:24:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:28.752 17:24:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:28.752 17:24:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:28.752 17:24:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.wAEyQaHTGC 00:07:28.752 17:24:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61442 00:07:28.752 17:24:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:28.752 17:24:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61442 00:07:28.752 17:24:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61442 ']' 00:07:28.752 17:24:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.752 17:24:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:28.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.752 17:24:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:28.752 17:24:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:28.752 17:24:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.752 [2024-12-07 17:24:01.933279] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:07:28.752 [2024-12-07 17:24:01.933394] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61442 ] 00:07:28.752 [2024-12-07 17:24:02.105847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.010 [2024-12-07 17:24:02.247264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.269 [2024-12-07 17:24:02.489072] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:29.269 [2024-12-07 17:24:02.489130] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:29.528 17:24:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:29.528 17:24:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:29.528 17:24:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:29.528 17:24:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:29.528 17:24:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.528 17:24:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.528 BaseBdev1_malloc 00:07:29.528 17:24:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.528 17:24:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:07:29.528 17:24:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.528 17:24:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.528 true 00:07:29.528 17:24:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.528 17:24:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:29.528 17:24:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.528 17:24:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.528 [2024-12-07 17:24:02.836078] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:29.528 [2024-12-07 17:24:02.836177] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:29.528 [2024-12-07 17:24:02.836209] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:29.528 [2024-12-07 17:24:02.836224] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:29.528 [2024-12-07 17:24:02.838768] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:29.528 [2024-12-07 17:24:02.838827] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:29.528 BaseBdev1 00:07:29.528 17:24:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.528 17:24:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:29.528 17:24:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:29.528 17:24:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.528 17:24:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:07:29.528 BaseBdev2_malloc 00:07:29.528 17:24:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.528 17:24:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:29.529 17:24:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.529 17:24:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.529 true 00:07:29.529 17:24:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.529 17:24:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:29.529 17:24:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.529 17:24:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.786 [2024-12-07 17:24:02.910459] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:29.786 [2024-12-07 17:24:02.910545] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:29.786 [2024-12-07 17:24:02.910568] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:29.786 [2024-12-07 17:24:02.910582] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:29.786 [2024-12-07 17:24:02.913003] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:29.786 [2024-12-07 17:24:02.913047] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:29.786 BaseBdev2 00:07:29.786 17:24:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.786 17:24:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:29.786 17:24:02 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.786 17:24:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.786 [2024-12-07 17:24:02.922583] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:29.786 [2024-12-07 17:24:02.924889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:29.786 [2024-12-07 17:24:02.925178] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:29.786 [2024-12-07 17:24:02.925223] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:29.786 [2024-12-07 17:24:02.925552] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:29.786 [2024-12-07 17:24:02.925793] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:29.786 [2024-12-07 17:24:02.925808] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:29.786 [2024-12-07 17:24:02.926123] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:29.786 17:24:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.786 17:24:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:29.786 17:24:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:29.786 17:24:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:29.786 17:24:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:29.786 17:24:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:29.786 17:24:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:29.786 17:24:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:29.786 17:24:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:29.787 17:24:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:29.787 17:24:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:29.787 17:24:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.787 17:24:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:29.787 17:24:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.787 17:24:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.787 17:24:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.787 17:24:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:29.787 "name": "raid_bdev1", 00:07:29.787 "uuid": "148eb895-07f8-436a-8c0b-ff76336da903", 00:07:29.787 "strip_size_kb": 64, 00:07:29.787 "state": "online", 00:07:29.787 "raid_level": "raid0", 00:07:29.787 "superblock": true, 00:07:29.787 "num_base_bdevs": 2, 00:07:29.787 "num_base_bdevs_discovered": 2, 00:07:29.787 "num_base_bdevs_operational": 2, 00:07:29.787 "base_bdevs_list": [ 00:07:29.787 { 00:07:29.787 "name": "BaseBdev1", 00:07:29.787 "uuid": "ca7dd831-68ae-5ea8-93d8-d11daaeb5914", 00:07:29.787 "is_configured": true, 00:07:29.787 "data_offset": 2048, 00:07:29.787 "data_size": 63488 00:07:29.787 }, 00:07:29.787 { 00:07:29.787 "name": "BaseBdev2", 00:07:29.787 "uuid": "622631f7-7096-59c1-8241-d9a8daadfccc", 00:07:29.787 "is_configured": true, 00:07:29.787 "data_offset": 2048, 00:07:29.787 "data_size": 63488 00:07:29.787 } 00:07:29.787 ] 00:07:29.787 }' 00:07:29.787 17:24:02 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:29.787 17:24:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.044 17:24:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:30.044 17:24:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:30.302 [2024-12-07 17:24:03.467041] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:31.238 17:24:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:31.238 17:24:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.238 17:24:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.238 17:24:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.238 17:24:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:31.238 17:24:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:31.238 17:24:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:31.238 17:24:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:31.238 17:24:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:31.238 17:24:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:31.238 17:24:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:31.238 17:24:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:31.238 17:24:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:31.238 17:24:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:31.238 17:24:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:31.238 17:24:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:31.238 17:24:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:31.238 17:24:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.238 17:24:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:31.238 17:24:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.238 17:24:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.238 17:24:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.238 17:24:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:31.238 "name": "raid_bdev1", 00:07:31.238 "uuid": "148eb895-07f8-436a-8c0b-ff76336da903", 00:07:31.238 "strip_size_kb": 64, 00:07:31.238 "state": "online", 00:07:31.238 "raid_level": "raid0", 00:07:31.238 "superblock": true, 00:07:31.238 "num_base_bdevs": 2, 00:07:31.238 "num_base_bdevs_discovered": 2, 00:07:31.238 "num_base_bdevs_operational": 2, 00:07:31.238 "base_bdevs_list": [ 00:07:31.238 { 00:07:31.238 "name": "BaseBdev1", 00:07:31.238 "uuid": "ca7dd831-68ae-5ea8-93d8-d11daaeb5914", 00:07:31.238 "is_configured": true, 00:07:31.238 "data_offset": 2048, 00:07:31.238 "data_size": 63488 00:07:31.238 }, 00:07:31.238 { 00:07:31.238 "name": "BaseBdev2", 00:07:31.238 "uuid": "622631f7-7096-59c1-8241-d9a8daadfccc", 00:07:31.238 "is_configured": true, 00:07:31.238 "data_offset": 2048, 00:07:31.238 "data_size": 63488 00:07:31.238 } 00:07:31.238 ] 00:07:31.238 }' 00:07:31.238 17:24:04 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:31.238 17:24:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.497 17:24:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:31.497 17:24:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.497 17:24:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.497 [2024-12-07 17:24:04.849126] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:31.497 [2024-12-07 17:24:04.849305] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:31.497 [2024-12-07 17:24:04.852811] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:31.497 [2024-12-07 17:24:04.852963] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:31.498 [2024-12-07 17:24:04.853059] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:31.498 [2024-12-07 17:24:04.853130] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:31.498 { 00:07:31.498 "results": [ 00:07:31.498 { 00:07:31.498 "job": "raid_bdev1", 00:07:31.498 "core_mask": "0x1", 00:07:31.498 "workload": "randrw", 00:07:31.498 "percentage": 50, 00:07:31.498 "status": "finished", 00:07:31.498 "queue_depth": 1, 00:07:31.498 "io_size": 131072, 00:07:31.498 "runtime": 1.382926, 00:07:31.498 "iops": 12229.866240131432, 00:07:31.498 "mibps": 1528.733280016429, 00:07:31.498 "io_failed": 1, 00:07:31.498 "io_timeout": 0, 00:07:31.498 "avg_latency_us": 114.40157839323822, 00:07:31.498 "min_latency_us": 26.382532751091702, 00:07:31.498 "max_latency_us": 1788.646288209607 00:07:31.498 } 00:07:31.498 ], 00:07:31.498 "core_count": 1 00:07:31.498 } 00:07:31.498 17:24:04 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.498 17:24:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61442 00:07:31.498 17:24:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61442 ']' 00:07:31.498 17:24:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61442 00:07:31.498 17:24:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:31.498 17:24:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:31.498 17:24:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61442 00:07:31.757 killing process with pid 61442 00:07:31.757 17:24:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:31.757 17:24:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:31.757 17:24:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61442' 00:07:31.757 17:24:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61442 00:07:31.757 [2024-12-07 17:24:04.899723] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:31.757 17:24:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61442 00:07:31.757 [2024-12-07 17:24:05.082379] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:33.661 17:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.wAEyQaHTGC 00:07:33.661 17:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:33.661 17:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:33.661 17:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:07:33.661 17:24:06 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:33.661 17:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:33.661 17:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:33.661 17:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:07:33.661 00:07:33.661 real 0m4.729s 00:07:33.661 user 0m5.495s 00:07:33.661 sys 0m0.654s 00:07:33.661 17:24:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.661 ************************************ 00:07:33.661 END TEST raid_read_error_test 00:07:33.661 ************************************ 00:07:33.661 17:24:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.661 17:24:06 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:07:33.661 17:24:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:33.661 17:24:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.661 17:24:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:33.661 ************************************ 00:07:33.661 START TEST raid_write_error_test 00:07:33.661 ************************************ 00:07:33.661 17:24:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:07:33.661 17:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:33.661 17:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:33.661 17:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:33.661 17:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:33.661 17:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:33.661 17:24:06 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:33.661 17:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:33.661 17:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:33.661 17:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:33.661 17:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:33.661 17:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:33.661 17:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:33.661 17:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:33.661 17:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:33.661 17:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:33.661 17:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:33.661 17:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:33.661 17:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:33.661 17:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:33.661 17:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:33.661 17:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:33.661 17:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:33.661 17:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ZD0trPZM78 00:07:33.661 17:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61593 00:07:33.661 17:24:06 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:33.661 17:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61593 00:07:33.661 17:24:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61593 ']' 00:07:33.661 17:24:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:33.661 17:24:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:33.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:33.661 17:24:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:33.661 17:24:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:33.661 17:24:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.661 [2024-12-07 17:24:06.731666] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:07:33.661 [2024-12-07 17:24:06.731850] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61593 ] 00:07:33.661 [2024-12-07 17:24:06.906454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.661 [2024-12-07 17:24:07.033828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.920 [2024-12-07 17:24:07.267577] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:33.920 [2024-12-07 17:24:07.267714] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:34.488 17:24:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:34.488 17:24:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:34.488 17:24:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:34.488 17:24:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:34.488 17:24:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.488 17:24:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.488 BaseBdev1_malloc 00:07:34.488 17:24:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.488 17:24:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:34.488 17:24:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.488 17:24:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.488 true 00:07:34.488 17:24:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:07:34.488 17:24:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:34.488 17:24:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.488 17:24:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.488 [2024-12-07 17:24:07.626158] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:34.488 [2024-12-07 17:24:07.626316] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:34.488 [2024-12-07 17:24:07.626344] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:34.488 [2024-12-07 17:24:07.626357] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:34.488 [2024-12-07 17:24:07.628735] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:34.488 [2024-12-07 17:24:07.628785] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:34.488 BaseBdev1 00:07:34.488 17:24:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.488 17:24:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:34.488 17:24:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:34.488 17:24:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.488 17:24:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.488 BaseBdev2_malloc 00:07:34.488 17:24:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.488 17:24:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:34.488 17:24:07 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:34.488 17:24:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:34.488 true
00:07:34.488 17:24:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:34.488 17:24:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:07:34.488 17:24:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:34.489 17:24:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:34.489 [2024-12-07 17:24:07.700543] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:07:34.489 [2024-12-07 17:24:07.700608] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:34.489 [2024-12-07 17:24:07.700627] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:07:34.489 [2024-12-07 17:24:07.700641] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:34.489 [2024-12-07 17:24:07.702974] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:34.489 [2024-12-07 17:24:07.703025] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:07:34.489 BaseBdev2
00:07:34.489 17:24:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:34.489 17:24:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s
00:07:34.489 17:24:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:34.489 17:24:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:34.489 [2024-12-07 17:24:07.712596] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:07:34.489 [2024-12-07 17:24:07.714670] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:07:34.489 [2024-12-07 17:24:07.714986] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:07:34.489 [2024-12-07 17:24:07.715022] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:07:34.489 [2024-12-07 17:24:07.715264] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:07:34.489 [2024-12-07 17:24:07.715460] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:07:34.489 [2024-12-07 17:24:07.715475] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:07:34.489 [2024-12-07 17:24:07.715636] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:34.489 17:24:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:34.489 17:24:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2
00:07:34.489 17:24:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:07:34.489 17:24:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:34.489 17:24:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:07:34.489 17:24:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:34.489 17:24:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:34.489 17:24:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:34.489 17:24:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:34.489 17:24:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:34.489 17:24:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:34.489 17:24:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:34.489 17:24:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:07:34.489 17:24:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:34.489 17:24:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:34.489 17:24:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:34.489 17:24:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:34.489 "name": "raid_bdev1",
00:07:34.489 "uuid": "7538a0c6-7448-4e83-a737-fe5c228094d9",
00:07:34.489 "strip_size_kb": 64,
00:07:34.489 "state": "online",
00:07:34.489 "raid_level": "raid0",
00:07:34.489 "superblock": true,
00:07:34.489 "num_base_bdevs": 2,
00:07:34.489 "num_base_bdevs_discovered": 2,
00:07:34.489 "num_base_bdevs_operational": 2,
00:07:34.489 "base_bdevs_list": [
00:07:34.489 {
00:07:34.489 "name": "BaseBdev1",
00:07:34.489 "uuid": "f6abfde6-2767-5200-801a-37afda22323a",
00:07:34.489 "is_configured": true,
00:07:34.489 "data_offset": 2048,
00:07:34.489 "data_size": 63488
00:07:34.489 },
00:07:34.489 {
00:07:34.489 "name": "BaseBdev2",
00:07:34.489 "uuid": "93ad367b-dae9-530e-a3d1-62b1034621fe",
00:07:34.489 "is_configured": true,
00:07:34.489 "data_offset": 2048,
00:07:34.489 "data_size": 63488
00:07:34.489 }
00:07:34.489 ]
00:07:34.489 }'
00:07:34.489 17:24:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:34.489 17:24:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:35.056 17:24:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:07:35.056 17:24:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:07:35.057 [2024-12-07 17:24:08.217089] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0
00:07:35.993 17:24:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure
00:07:35.993 17:24:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:35.993 17:24:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:35.993 17:24:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:35.993 17:24:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:07:35.993 17:24:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]]
00:07:35.993 17:24:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2
00:07:35.993 17:24:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2
00:07:35.993 17:24:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:07:35.993 17:24:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:35.994 17:24:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:07:35.994 17:24:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:35.994 17:24:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:35.994 17:24:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:35.994 17:24:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:35.994 17:24:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:35.994 17:24:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:35.994 17:24:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:35.994 17:24:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:07:35.994 17:24:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:35.994 17:24:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:35.994 17:24:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:35.994 17:24:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:35.994 "name": "raid_bdev1",
00:07:35.994 "uuid": "7538a0c6-7448-4e83-a737-fe5c228094d9",
00:07:35.994 "strip_size_kb": 64,
00:07:35.994 "state": "online",
00:07:35.994 "raid_level": "raid0",
00:07:35.994 "superblock": true,
00:07:35.994 "num_base_bdevs": 2,
00:07:35.994 "num_base_bdevs_discovered": 2,
00:07:35.994 "num_base_bdevs_operational": 2,
00:07:35.994 "base_bdevs_list": [
00:07:35.994 {
00:07:35.994 "name": "BaseBdev1",
00:07:35.994 "uuid": "f6abfde6-2767-5200-801a-37afda22323a",
00:07:35.994 "is_configured": true,
00:07:35.994 "data_offset": 2048,
00:07:35.994 "data_size": 63488
00:07:35.994 },
00:07:35.994 {
00:07:35.994 "name": "BaseBdev2",
00:07:35.994 "uuid": "93ad367b-dae9-530e-a3d1-62b1034621fe",
00:07:35.994 "is_configured": true,
00:07:35.994 "data_offset": 2048,
00:07:35.994 "data_size": 63488
00:07:35.994 }
00:07:35.994 ]
00:07:35.994 }'
00:07:35.994 17:24:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:35.994 17:24:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:36.253 17:24:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:07:36.253 17:24:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:36.253 17:24:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:36.253 [2024-12-07 17:24:09.581530] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:07:36.253 [2024-12-07 17:24:09.581685] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:07:36.253 [2024-12-07 17:24:09.584429] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:36.253 [2024-12-07 17:24:09.584537] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:36.253 [2024-12-07 17:24:09.584608] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:36.253 [2024-12-07 17:24:09.584692] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:07:36.253 {
00:07:36.253 "results": [
00:07:36.253 {
00:07:36.253 "job": "raid_bdev1",
00:07:36.253 "core_mask": "0x1",
00:07:36.253 "workload": "randrw",
00:07:36.253 "percentage": 50,
00:07:36.253 "status": "finished",
00:07:36.253 "queue_depth": 1,
00:07:36.253 "io_size": 131072,
00:07:36.253 "runtime": 1.365347,
00:07:36.253 "iops": 13977.399152010441,
00:07:36.253 "mibps": 1747.1748940013051,
00:07:36.253 "io_failed": 1,
00:07:36.253 "io_timeout": 0,
00:07:36.253 "avg_latency_us": 99.88182548081268,
00:07:36.253 "min_latency_us": 27.388646288209607,
00:07:36.253 "max_latency_us": 1430.9170305676855
00:07:36.253 }
00:07:36.253 ],
00:07:36.253 "core_count": 1
00:07:36.253 }
00:07:36.253 17:24:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:36.253 17:24:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61593
00:07:36.253 17:24:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 61593 ']'
00:07:36.253 17:24:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61593
00:07:36.253 17:24:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname
00:07:36.253 17:24:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:36.253 17:24:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61593
00:07:36.253 17:24:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:36.253 17:24:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 61593
00:07:36.253 17:24:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61593'
00:07:36.253 17:24:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61593
00:07:36.253 [2024-12-07 17:24:09.630814] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:07:36.253 17:24:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61593
00:07:36.511 [2024-12-07 17:24:09.775319] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:37.888 17:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ZD0trPZM78
00:07:37.888 17:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:07:37.888 17:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:07:37.888 17:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73
00:07:37.888 17:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0
00:07:37.888 17:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:07:37.888 17:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:07:37.888 ************************************
00:07:37.888 END TEST raid_write_error_test
00:07:37.888 ************************************
00:07:37.888 17:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]]
00:07:37.888
00:07:37.888 real 0m4.433s
00:07:37.888 user 0m5.122s
00:07:37.888 sys 0m0.656s
00:07:37.888 17:24:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:37.888 17:24:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:37.888 17:24:11 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:07:37.888 17:24:11 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false
00:07:37.888 17:24:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:07:37.888 17:24:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:37.888 17:24:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:37.888 ************************************
00:07:37.888 START TEST raid_state_function_test
00:07:37.888 ************************************
00:07:37.888 17:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false
00:07:37.888 17:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat
00:07:37.888 17:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2
00:07:37.888 17:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:07:37.888 17:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:07:37.888 17:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:07:37.888 17:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:07:37.888 17:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:07:37.888 17:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:07:37.888 17:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:07:37.888 17:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:07:37.888 17:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:07:37.888 17:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:07:37.888 17:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:07:37.888 17:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:07:37.888 17:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:07:37.888 17:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:07:37.888 17:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:07:37.888 17:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:07:37.888 17:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']'
00:07:37.888 17:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:07:37.888 17:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:07:37.888 17:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:07:37.888 17:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:07:37.888 17:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61731
00:07:37.888 17:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:07:37.888 17:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61731'
00:07:37.888 Process raid pid: 61731
00:07:37.888 17:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61731
00:07:37.888 17:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61731 ']'
00:07:37.888 17:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:37.888 17:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:37.888 17:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:37.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:37.888 17:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:37.888 17:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:37.888 [2024-12-07 17:24:11.225700] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization...
00:07:37.888 [2024-12-07 17:24:11.225973] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:38.146 [2024-12-07 17:24:11.403049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:38.404 [2024-12-07 17:24:11.536599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:38.404 [2024-12-07 17:24:11.772251] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:38.404 [2024-12-07 17:24:11.772399] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:38.972 17:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:38.972 17:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0
00:07:38.972 17:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:38.972 17:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:38.972 17:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:38.972 [2024-12-07 17:24:12.062253] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:07:38.972 [2024-12-07 17:24:12.062338] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:07:38.972 [2024-12-07 17:24:12.062350] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:38.972 [2024-12-07 17:24:12.062363] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:38.972 17:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:38.972 17:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:07:38.972 17:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:38.972 17:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:38.972 17:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:07:38.972 17:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:38.972 17:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:38.972 17:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:38.972 17:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:38.972 17:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:38.972 17:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:38.972 17:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:38.972 17:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:38.972 17:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:38.972 17:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:38.972 17:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:38.972 17:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:38.972 "name": "Existed_Raid",
00:07:38.972 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:38.972 "strip_size_kb": 64,
00:07:38.972 "state": "configuring",
"raid_level": "concat",
00:07:38.972 "superblock": false,
00:07:38.972 "num_base_bdevs": 2,
00:07:38.972 "num_base_bdevs_discovered": 0,
00:07:38.972 "num_base_bdevs_operational": 2,
00:07:38.972 "base_bdevs_list": [
00:07:38.972 {
00:07:38.972 "name": "BaseBdev1",
00:07:38.972 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:38.972 "is_configured": false,
00:07:38.972 "data_offset": 0,
00:07:38.972 "data_size": 0
00:07:38.972 },
00:07:38.972 {
00:07:38.972 "name": "BaseBdev2",
00:07:38.972 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:38.972 "is_configured": false,
00:07:38.972 "data_offset": 0,
00:07:38.972 "data_size": 0
00:07:38.972 }
00:07:38.972 ]
00:07:38.972 }'
00:07:38.972 17:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:38.972 17:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:39.241 17:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:07:39.241 17:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:39.241 17:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:39.241 [2024-12-07 17:24:12.561456] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:07:39.241 [2024-12-07 17:24:12.561619] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:07:39.241 17:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:39.241 17:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:39.241 17:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:39.241 17:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:39.241 [2024-12-07 17:24:12.573395] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:07:39.241 [2024-12-07 17:24:12.573521] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:07:39.241 [2024-12-07 17:24:12.573556] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:39.241 [2024-12-07 17:24:12.573587] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:39.241 17:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:39.241 17:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:07:39.241 17:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:39.241 17:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:39.522 [2024-12-07 17:24:12.630241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:07:39.522 BaseBdev1
00:07:39.522 17:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:39.522 17:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:07:39.522 17:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:07:39.522 17:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:07:39.522 17:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:07:39.522 17:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:07:39.522 17:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:07:39.522 17:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:07:39.522 17:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:39.522 17:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:39.522 17:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:39.522 17:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:07:39.522 17:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:39.522 17:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:39.522 [
00:07:39.522 {
00:07:39.522 "name": "BaseBdev1",
00:07:39.522 "aliases": [
00:07:39.522 "8a6a0fa7-df5a-4424-965b-e32341ac0def"
00:07:39.522 ],
00:07:39.522 "product_name": "Malloc disk",
00:07:39.522 "block_size": 512,
00:07:39.522 "num_blocks": 65536,
00:07:39.522 "uuid": "8a6a0fa7-df5a-4424-965b-e32341ac0def",
00:07:39.522 "assigned_rate_limits": {
00:07:39.522 "rw_ios_per_sec": 0,
00:07:39.522 "rw_mbytes_per_sec": 0,
00:07:39.522 "r_mbytes_per_sec": 0,
00:07:39.522 "w_mbytes_per_sec": 0
00:07:39.522 },
00:07:39.522 "claimed": true,
00:07:39.522 "claim_type": "exclusive_write",
00:07:39.522 "zoned": false,
00:07:39.522 "supported_io_types": {
00:07:39.522 "read": true,
00:07:39.522 "write": true,
00:07:39.522 "unmap": true,
00:07:39.522 "flush": true,
00:07:39.522 "reset": true,
00:07:39.522 "nvme_admin": false,
00:07:39.522 "nvme_io": false,
00:07:39.522 "nvme_io_md": false,
00:07:39.522 "write_zeroes": true,
00:07:39.522 "zcopy": true,
00:07:39.522 "get_zone_info": false,
00:07:39.522 "zone_management": false,
00:07:39.522 "zone_append": false,
00:07:39.522 "compare": false,
00:07:39.522 "compare_and_write": false,
00:07:39.522 "abort": true,
00:07:39.522 "seek_hole": false,
00:07:39.522 "seek_data": false,
00:07:39.522 "copy": true,
00:07:39.522 "nvme_iov_md": false
00:07:39.522 },
00:07:39.522 "memory_domains": [
00:07:39.522 {
00:07:39.522 "dma_device_id": "system",
00:07:39.522 "dma_device_type": 1
00:07:39.522 },
00:07:39.522 {
00:07:39.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:39.522 "dma_device_type": 2
00:07:39.522 }
00:07:39.522 ],
00:07:39.522 "driver_specific": {}
00:07:39.522 }
00:07:39.522 ]
00:07:39.522 17:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:39.522 17:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:07:39.522 17:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:07:39.522 17:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:39.522 17:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:39.522 17:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:07:39.522 17:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:39.522 17:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:39.522 17:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:39.522 17:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:39.522 17:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:39.522 17:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:39.522 17:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:39.522 17:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
17:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:39.522 17:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:39.522 17:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:39.522 17:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:39.522 "name": "Existed_Raid",
00:07:39.523 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:39.523 "strip_size_kb": 64,
00:07:39.523 "state": "configuring",
00:07:39.523 "raid_level": "concat",
00:07:39.523 "superblock": false,
00:07:39.523 "num_base_bdevs": 2,
00:07:39.523 "num_base_bdevs_discovered": 1,
00:07:39.523 "num_base_bdevs_operational": 2,
00:07:39.523 "base_bdevs_list": [
00:07:39.523 {
00:07:39.523 "name": "BaseBdev1",
00:07:39.523 "uuid": "8a6a0fa7-df5a-4424-965b-e32341ac0def",
00:07:39.523 "is_configured": true,
00:07:39.523 "data_offset": 0,
00:07:39.523 "data_size": 65536
00:07:39.523 },
00:07:39.523 {
00:07:39.523 "name": "BaseBdev2",
00:07:39.523 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:39.523 "is_configured": false,
00:07:39.523 "data_offset": 0,
00:07:39.523 "data_size": 0
00:07:39.523 }
00:07:39.523 ]
00:07:39.523 }'
00:07:39.523 17:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:39.523 17:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:39.795 17:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:07:39.795 17:24:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:39.795 17:24:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:39.795 [2024-12-07 17:24:13.145665] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:07:39.795 [2024-12-07 17:24:13.145852] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:07:39.795 17:24:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:39.795 17:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:39.795 17:24:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:39.795 17:24:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:39.795 [2024-12-07 17:24:13.157654] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:07:39.795 [2024-12-07 17:24:13.159845] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:39.795 [2024-12-07 17:24:13.159957] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:39.795 17:24:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:39.795 17:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:07:39.795 17:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:07:39.795 17:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:07:39.796 17:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:39.796 17:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:39.796 17:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:07:39.796 17:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:39.796 17:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:39.796 17:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:39.796 17:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:39.796 17:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:39.796 17:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:39.796 17:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:39.796 17:24:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:39.796 17:24:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:39.796 17:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:40.055 17:24:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:40.055 17:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:40.055 "name": "Existed_Raid",
00:07:40.055 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:40.055 "strip_size_kb": 64,
00:07:40.055 "state": "configuring",
00:07:40.055 "raid_level": "concat",
00:07:40.055 "superblock": false,
00:07:40.055 "num_base_bdevs": 2,
00:07:40.055 "num_base_bdevs_discovered": 1,
00:07:40.056 "num_base_bdevs_operational": 2,
00:07:40.056 "base_bdevs_list": [
00:07:40.056 {
00:07:40.056 "name": "BaseBdev1",
00:07:40.056 "uuid": "8a6a0fa7-df5a-4424-965b-e32341ac0def",
00:07:40.056 "is_configured": true,
00:07:40.056 "data_offset": 0,
00:07:40.056 "data_size": 65536
00:07:40.056 },
00:07:40.056 {
00:07:40.056 "name": "BaseBdev2",
00:07:40.056 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:40.056 "is_configured": false,
00:07:40.056 "data_offset": 0,
00:07:40.056 "data_size": 0
00:07:40.056 }
00:07:40.056 ]
00:07:40.056 }'
00:07:40.056 17:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:40.056 17:24:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:40.316 17:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:07:40.316 17:24:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:40.316 17:24:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:40.316 [2024-12-07 17:24:13.613754] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:07:40.316 [2024-12-07 17:24:13.613908] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:07:40.316 [2024-12-07 17:24:13.613963] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:07:40.316 [2024-12-07 17:24:13.614321] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:07:40.316 [2024-12-07 17:24:13.614584] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:07:40.316 [2024-12-07 17:24:13.614640] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:07:40.316 [2024-12-07 17:24:13.615020] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:40.316 BaseBdev2
00:07:40.316 17:24:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:40.316 17:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:07:40.316 17:24:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:07:40.316 17:24:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:07:40.316 17:24:13
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:40.316 17:24:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:40.316 17:24:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:40.316 17:24:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:40.316 17:24:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.316 17:24:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.316 17:24:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.316 17:24:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:40.316 17:24:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.316 17:24:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.316 [ 00:07:40.316 { 00:07:40.316 "name": "BaseBdev2", 00:07:40.316 "aliases": [ 00:07:40.316 "52a9cd14-b1cf-4545-8679-05aea887ae9c" 00:07:40.316 ], 00:07:40.316 "product_name": "Malloc disk", 00:07:40.316 "block_size": 512, 00:07:40.316 "num_blocks": 65536, 00:07:40.316 "uuid": "52a9cd14-b1cf-4545-8679-05aea887ae9c", 00:07:40.316 "assigned_rate_limits": { 00:07:40.316 "rw_ios_per_sec": 0, 00:07:40.316 "rw_mbytes_per_sec": 0, 00:07:40.316 "r_mbytes_per_sec": 0, 00:07:40.316 "w_mbytes_per_sec": 0 00:07:40.316 }, 00:07:40.316 "claimed": true, 00:07:40.316 "claim_type": "exclusive_write", 00:07:40.316 "zoned": false, 00:07:40.316 "supported_io_types": { 00:07:40.316 "read": true, 00:07:40.316 "write": true, 00:07:40.316 "unmap": true, 00:07:40.316 "flush": true, 00:07:40.316 "reset": true, 00:07:40.316 "nvme_admin": false, 00:07:40.316 "nvme_io": false, 00:07:40.316 "nvme_io_md": 
false, 00:07:40.316 "write_zeroes": true, 00:07:40.316 "zcopy": true, 00:07:40.316 "get_zone_info": false, 00:07:40.316 "zone_management": false, 00:07:40.316 "zone_append": false, 00:07:40.316 "compare": false, 00:07:40.316 "compare_and_write": false, 00:07:40.316 "abort": true, 00:07:40.316 "seek_hole": false, 00:07:40.316 "seek_data": false, 00:07:40.316 "copy": true, 00:07:40.316 "nvme_iov_md": false 00:07:40.316 }, 00:07:40.316 "memory_domains": [ 00:07:40.316 { 00:07:40.316 "dma_device_id": "system", 00:07:40.316 "dma_device_type": 1 00:07:40.316 }, 00:07:40.316 { 00:07:40.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:40.316 "dma_device_type": 2 00:07:40.316 } 00:07:40.316 ], 00:07:40.316 "driver_specific": {} 00:07:40.316 } 00:07:40.316 ] 00:07:40.316 17:24:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.316 17:24:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:40.316 17:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:40.316 17:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:40.316 17:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:40.316 17:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:40.316 17:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:40.316 17:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:40.316 17:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:40.316 17:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:40.316 17:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:40.316 17:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:40.316 17:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:40.316 17:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:40.316 17:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.316 17:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:40.316 17:24:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.316 17:24:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.316 17:24:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.576 17:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:40.576 "name": "Existed_Raid", 00:07:40.576 "uuid": "48637966-d827-400a-a6d4-56d4928b134e", 00:07:40.576 "strip_size_kb": 64, 00:07:40.576 "state": "online", 00:07:40.576 "raid_level": "concat", 00:07:40.576 "superblock": false, 00:07:40.576 "num_base_bdevs": 2, 00:07:40.576 "num_base_bdevs_discovered": 2, 00:07:40.576 "num_base_bdevs_operational": 2, 00:07:40.576 "base_bdevs_list": [ 00:07:40.576 { 00:07:40.576 "name": "BaseBdev1", 00:07:40.576 "uuid": "8a6a0fa7-df5a-4424-965b-e32341ac0def", 00:07:40.576 "is_configured": true, 00:07:40.576 "data_offset": 0, 00:07:40.576 "data_size": 65536 00:07:40.576 }, 00:07:40.576 { 00:07:40.576 "name": "BaseBdev2", 00:07:40.576 "uuid": "52a9cd14-b1cf-4545-8679-05aea887ae9c", 00:07:40.576 "is_configured": true, 00:07:40.576 "data_offset": 0, 00:07:40.576 "data_size": 65536 00:07:40.576 } 00:07:40.576 ] 00:07:40.576 }' 00:07:40.576 17:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:40.576 17:24:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.835 17:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:40.835 17:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:40.835 17:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:40.835 17:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:40.835 17:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:40.835 17:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:40.835 17:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:40.835 17:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.835 17:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:40.835 17:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.835 [2024-12-07 17:24:14.105286] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:40.835 17:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.835 17:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:40.835 "name": "Existed_Raid", 00:07:40.835 "aliases": [ 00:07:40.835 "48637966-d827-400a-a6d4-56d4928b134e" 00:07:40.835 ], 00:07:40.835 "product_name": "Raid Volume", 00:07:40.835 "block_size": 512, 00:07:40.835 "num_blocks": 131072, 00:07:40.835 "uuid": "48637966-d827-400a-a6d4-56d4928b134e", 00:07:40.835 "assigned_rate_limits": { 00:07:40.835 "rw_ios_per_sec": 0, 00:07:40.835 "rw_mbytes_per_sec": 0, 00:07:40.835 "r_mbytes_per_sec": 
0, 00:07:40.836 "w_mbytes_per_sec": 0 00:07:40.836 }, 00:07:40.836 "claimed": false, 00:07:40.836 "zoned": false, 00:07:40.836 "supported_io_types": { 00:07:40.836 "read": true, 00:07:40.836 "write": true, 00:07:40.836 "unmap": true, 00:07:40.836 "flush": true, 00:07:40.836 "reset": true, 00:07:40.836 "nvme_admin": false, 00:07:40.836 "nvme_io": false, 00:07:40.836 "nvme_io_md": false, 00:07:40.836 "write_zeroes": true, 00:07:40.836 "zcopy": false, 00:07:40.836 "get_zone_info": false, 00:07:40.836 "zone_management": false, 00:07:40.836 "zone_append": false, 00:07:40.836 "compare": false, 00:07:40.836 "compare_and_write": false, 00:07:40.836 "abort": false, 00:07:40.836 "seek_hole": false, 00:07:40.836 "seek_data": false, 00:07:40.836 "copy": false, 00:07:40.836 "nvme_iov_md": false 00:07:40.836 }, 00:07:40.836 "memory_domains": [ 00:07:40.836 { 00:07:40.836 "dma_device_id": "system", 00:07:40.836 "dma_device_type": 1 00:07:40.836 }, 00:07:40.836 { 00:07:40.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:40.836 "dma_device_type": 2 00:07:40.836 }, 00:07:40.836 { 00:07:40.836 "dma_device_id": "system", 00:07:40.836 "dma_device_type": 1 00:07:40.836 }, 00:07:40.836 { 00:07:40.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:40.836 "dma_device_type": 2 00:07:40.836 } 00:07:40.836 ], 00:07:40.836 "driver_specific": { 00:07:40.836 "raid": { 00:07:40.836 "uuid": "48637966-d827-400a-a6d4-56d4928b134e", 00:07:40.836 "strip_size_kb": 64, 00:07:40.836 "state": "online", 00:07:40.836 "raid_level": "concat", 00:07:40.836 "superblock": false, 00:07:40.836 "num_base_bdevs": 2, 00:07:40.836 "num_base_bdevs_discovered": 2, 00:07:40.836 "num_base_bdevs_operational": 2, 00:07:40.836 "base_bdevs_list": [ 00:07:40.836 { 00:07:40.836 "name": "BaseBdev1", 00:07:40.836 "uuid": "8a6a0fa7-df5a-4424-965b-e32341ac0def", 00:07:40.836 "is_configured": true, 00:07:40.836 "data_offset": 0, 00:07:40.836 "data_size": 65536 00:07:40.836 }, 00:07:40.836 { 00:07:40.836 "name": "BaseBdev2", 
00:07:40.836 "uuid": "52a9cd14-b1cf-4545-8679-05aea887ae9c", 00:07:40.836 "is_configured": true, 00:07:40.836 "data_offset": 0, 00:07:40.836 "data_size": 65536 00:07:40.836 } 00:07:40.836 ] 00:07:40.836 } 00:07:40.836 } 00:07:40.836 }' 00:07:40.836 17:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:40.836 17:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:40.836 BaseBdev2' 00:07:40.836 17:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:41.095 17:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:41.095 17:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:41.095 17:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:41.095 17:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:41.095 17:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.095 17:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.095 17:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.095 17:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:41.095 17:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:41.095 17:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:41.095 17:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:07:41.095 17:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:41.095 17:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.095 17:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.095 17:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.095 17:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:41.095 17:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:41.095 17:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:41.096 17:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.096 17:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.096 [2024-12-07 17:24:14.308647] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:41.096 [2024-12-07 17:24:14.308706] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:41.096 [2024-12-07 17:24:14.308768] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:41.096 17:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.096 17:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:41.096 17:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:41.096 17:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:41.096 17:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:41.096 17:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:07:41.096 17:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:41.096 17:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:41.096 17:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:41.096 17:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:41.096 17:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:41.096 17:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:41.096 17:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.096 17:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:41.096 17:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:41.096 17:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:41.096 17:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.096 17:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.096 17:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:41.096 17:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.096 17:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.096 17:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:41.096 "name": "Existed_Raid", 00:07:41.096 "uuid": "48637966-d827-400a-a6d4-56d4928b134e", 00:07:41.096 "strip_size_kb": 64, 00:07:41.096 
"state": "offline", 00:07:41.096 "raid_level": "concat", 00:07:41.096 "superblock": false, 00:07:41.096 "num_base_bdevs": 2, 00:07:41.096 "num_base_bdevs_discovered": 1, 00:07:41.096 "num_base_bdevs_operational": 1, 00:07:41.096 "base_bdevs_list": [ 00:07:41.096 { 00:07:41.096 "name": null, 00:07:41.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.096 "is_configured": false, 00:07:41.096 "data_offset": 0, 00:07:41.096 "data_size": 65536 00:07:41.096 }, 00:07:41.096 { 00:07:41.096 "name": "BaseBdev2", 00:07:41.096 "uuid": "52a9cd14-b1cf-4545-8679-05aea887ae9c", 00:07:41.096 "is_configured": true, 00:07:41.096 "data_offset": 0, 00:07:41.096 "data_size": 65536 00:07:41.096 } 00:07:41.096 ] 00:07:41.096 }' 00:07:41.096 17:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.096 17:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.664 17:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:41.664 17:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:41.664 17:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.664 17:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:41.664 17:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.664 17:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.664 17:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.664 17:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:41.664 17:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:41.664 17:24:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:41.664 17:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.664 17:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.664 [2024-12-07 17:24:14.948688] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:41.664 [2024-12-07 17:24:14.948865] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:41.925 17:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.925 17:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:41.925 17:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:41.925 17:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.926 17:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:41.926 17:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.926 17:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.926 17:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.926 17:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:41.926 17:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:41.926 17:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:41.926 17:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61731 00:07:41.926 17:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61731 ']' 00:07:41.926 17:24:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 61731 00:07:41.926 17:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:41.926 17:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:41.926 17:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61731 00:07:41.926 killing process with pid 61731 00:07:41.926 17:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:41.926 17:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:41.926 17:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61731' 00:07:41.926 17:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61731 00:07:41.926 [2024-12-07 17:24:15.150213] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:41.926 17:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61731 00:07:41.926 [2024-12-07 17:24:15.166618] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:43.304 17:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:43.304 ************************************ 00:07:43.304 END TEST raid_state_function_test 00:07:43.304 ************************************ 00:07:43.304 00:07:43.304 real 0m5.256s 00:07:43.304 user 0m7.462s 00:07:43.304 sys 0m0.933s 00:07:43.304 17:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:43.304 17:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.304 17:24:16 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:07:43.304 17:24:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:07:43.304 17:24:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:43.304 17:24:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:43.304 ************************************ 00:07:43.304 START TEST raid_state_function_test_sb 00:07:43.304 ************************************ 00:07:43.304 17:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:07:43.304 17:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:43.304 17:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:43.304 17:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:43.304 17:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:43.304 17:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:43.304 17:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:43.304 17:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:43.304 17:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:43.304 17:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:43.304 17:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:43.304 17:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:43.304 17:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:43.304 17:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:43.304 17:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:07:43.304 17:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:43.304 17:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:43.304 17:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:43.304 17:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:43.304 17:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:43.304 17:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:43.304 17:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:43.304 17:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:43.304 17:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:43.304 Process raid pid: 61990 00:07:43.304 17:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61990 00:07:43.304 17:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:43.304 17:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61990' 00:07:43.304 17:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61990 00:07:43.304 17:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61990 ']' 00:07:43.304 17:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.304 17:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:43.304 17:24:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.304 17:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:43.304 17:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.304 [2024-12-07 17:24:16.547471] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:07:43.304 [2024-12-07 17:24:16.547682] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:43.563 [2024-12-07 17:24:16.707438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.563 [2024-12-07 17:24:16.818668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.821 [2024-12-07 17:24:17.020948] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:43.821 [2024-12-07 17:24:17.020980] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:44.080 17:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:44.080 17:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:44.080 17:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:44.080 17:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.080 17:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.080 [2024-12-07 17:24:17.380484] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:07:44.080 [2024-12-07 17:24:17.380539] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:44.080 [2024-12-07 17:24:17.380549] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:44.080 [2024-12-07 17:24:17.380559] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:44.080 17:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.080 17:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:44.080 17:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:44.080 17:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:44.080 17:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:44.080 17:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:44.080 17:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:44.080 17:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:44.080 17:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.080 17:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.080 17:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.080 17:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.080 17:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:07:44.080 17:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.080 17:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.080 17:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.080 17:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:44.080 "name": "Existed_Raid", 00:07:44.080 "uuid": "9ea287f4-d49f-4032-baf0-d34d5a67473e", 00:07:44.080 "strip_size_kb": 64, 00:07:44.080 "state": "configuring", 00:07:44.080 "raid_level": "concat", 00:07:44.080 "superblock": true, 00:07:44.080 "num_base_bdevs": 2, 00:07:44.080 "num_base_bdevs_discovered": 0, 00:07:44.080 "num_base_bdevs_operational": 2, 00:07:44.080 "base_bdevs_list": [ 00:07:44.080 { 00:07:44.080 "name": "BaseBdev1", 00:07:44.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.080 "is_configured": false, 00:07:44.080 "data_offset": 0, 00:07:44.080 "data_size": 0 00:07:44.080 }, 00:07:44.080 { 00:07:44.080 "name": "BaseBdev2", 00:07:44.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.080 "is_configured": false, 00:07:44.080 "data_offset": 0, 00:07:44.080 "data_size": 0 00:07:44.080 } 00:07:44.080 ] 00:07:44.080 }' 00:07:44.080 17:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:44.080 17:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.647 17:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:44.647 17:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.647 17:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.647 [2024-12-07 17:24:17.859612] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:07:44.647 [2024-12-07 17:24:17.859718] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:44.647 17:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.647 17:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:44.647 17:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.647 17:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.647 [2024-12-07 17:24:17.871578] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:44.647 [2024-12-07 17:24:17.871657] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:44.647 [2024-12-07 17:24:17.871685] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:44.647 [2024-12-07 17:24:17.871712] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:44.647 17:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.647 17:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:44.647 17:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.647 17:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.647 [2024-12-07 17:24:17.920986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:44.647 BaseBdev1 00:07:44.647 17:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.647 17:24:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:44.647 17:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:44.647 17:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:44.647 17:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:44.647 17:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:44.647 17:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:44.647 17:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:44.647 17:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.647 17:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.647 17:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.647 17:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:44.647 17:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.647 17:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.647 [ 00:07:44.647 { 00:07:44.647 "name": "BaseBdev1", 00:07:44.647 "aliases": [ 00:07:44.647 "8f58edbd-8ac6-41f3-8714-81cf4c30ad33" 00:07:44.647 ], 00:07:44.647 "product_name": "Malloc disk", 00:07:44.647 "block_size": 512, 00:07:44.647 "num_blocks": 65536, 00:07:44.647 "uuid": "8f58edbd-8ac6-41f3-8714-81cf4c30ad33", 00:07:44.647 "assigned_rate_limits": { 00:07:44.647 "rw_ios_per_sec": 0, 00:07:44.647 "rw_mbytes_per_sec": 0, 00:07:44.647 "r_mbytes_per_sec": 0, 00:07:44.647 "w_mbytes_per_sec": 0 00:07:44.647 }, 00:07:44.647 "claimed": true, 
00:07:44.647 "claim_type": "exclusive_write", 00:07:44.647 "zoned": false, 00:07:44.647 "supported_io_types": { 00:07:44.647 "read": true, 00:07:44.647 "write": true, 00:07:44.647 "unmap": true, 00:07:44.647 "flush": true, 00:07:44.647 "reset": true, 00:07:44.647 "nvme_admin": false, 00:07:44.647 "nvme_io": false, 00:07:44.647 "nvme_io_md": false, 00:07:44.647 "write_zeroes": true, 00:07:44.647 "zcopy": true, 00:07:44.647 "get_zone_info": false, 00:07:44.647 "zone_management": false, 00:07:44.647 "zone_append": false, 00:07:44.647 "compare": false, 00:07:44.647 "compare_and_write": false, 00:07:44.647 "abort": true, 00:07:44.647 "seek_hole": false, 00:07:44.647 "seek_data": false, 00:07:44.647 "copy": true, 00:07:44.647 "nvme_iov_md": false 00:07:44.647 }, 00:07:44.647 "memory_domains": [ 00:07:44.647 { 00:07:44.647 "dma_device_id": "system", 00:07:44.647 "dma_device_type": 1 00:07:44.647 }, 00:07:44.647 { 00:07:44.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.647 "dma_device_type": 2 00:07:44.647 } 00:07:44.647 ], 00:07:44.647 "driver_specific": {} 00:07:44.647 } 00:07:44.647 ] 00:07:44.647 17:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.647 17:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:44.647 17:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:44.647 17:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:44.647 17:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:44.647 17:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:44.647 17:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:44.647 17:24:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:44.647 17:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:44.647 17:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.647 17:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.647 17:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.647 17:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.647 17:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:44.647 17:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.647 17:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.647 17:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.648 17:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:44.648 "name": "Existed_Raid", 00:07:44.648 "uuid": "d5281b42-2860-4b24-bb7d-7af7b2bf4983", 00:07:44.648 "strip_size_kb": 64, 00:07:44.648 "state": "configuring", 00:07:44.648 "raid_level": "concat", 00:07:44.648 "superblock": true, 00:07:44.648 "num_base_bdevs": 2, 00:07:44.648 "num_base_bdevs_discovered": 1, 00:07:44.648 "num_base_bdevs_operational": 2, 00:07:44.648 "base_bdevs_list": [ 00:07:44.648 { 00:07:44.648 "name": "BaseBdev1", 00:07:44.648 "uuid": "8f58edbd-8ac6-41f3-8714-81cf4c30ad33", 00:07:44.648 "is_configured": true, 00:07:44.648 "data_offset": 2048, 00:07:44.648 "data_size": 63488 00:07:44.648 }, 00:07:44.648 { 00:07:44.648 "name": "BaseBdev2", 00:07:44.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.648 
"is_configured": false, 00:07:44.648 "data_offset": 0, 00:07:44.648 "data_size": 0 00:07:44.648 } 00:07:44.648 ] 00:07:44.648 }' 00:07:44.648 17:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:44.648 17:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.214 17:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:45.214 17:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.214 17:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.214 [2024-12-07 17:24:18.388258] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:45.214 [2024-12-07 17:24:18.388399] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:45.214 17:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.214 17:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:45.215 17:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.215 17:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.215 [2024-12-07 17:24:18.400251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:45.215 [2024-12-07 17:24:18.402178] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:45.215 [2024-12-07 17:24:18.402266] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:45.215 17:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.215 17:24:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:45.215 17:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:45.215 17:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:45.215 17:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:45.215 17:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:45.215 17:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:45.215 17:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:45.215 17:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:45.215 17:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:45.215 17:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:45.215 17:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:45.215 17:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:45.215 17:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.215 17:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:45.215 17:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.215 17:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.215 17:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.215 17:24:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:45.215 "name": "Existed_Raid", 00:07:45.215 "uuid": "c9e6f92a-2df8-4f30-ba7c-71ebf9ee3dd8", 00:07:45.215 "strip_size_kb": 64, 00:07:45.215 "state": "configuring", 00:07:45.215 "raid_level": "concat", 00:07:45.215 "superblock": true, 00:07:45.215 "num_base_bdevs": 2, 00:07:45.215 "num_base_bdevs_discovered": 1, 00:07:45.215 "num_base_bdevs_operational": 2, 00:07:45.215 "base_bdevs_list": [ 00:07:45.215 { 00:07:45.215 "name": "BaseBdev1", 00:07:45.215 "uuid": "8f58edbd-8ac6-41f3-8714-81cf4c30ad33", 00:07:45.215 "is_configured": true, 00:07:45.215 "data_offset": 2048, 00:07:45.215 "data_size": 63488 00:07:45.215 }, 00:07:45.215 { 00:07:45.215 "name": "BaseBdev2", 00:07:45.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:45.215 "is_configured": false, 00:07:45.215 "data_offset": 0, 00:07:45.215 "data_size": 0 00:07:45.215 } 00:07:45.215 ] 00:07:45.215 }' 00:07:45.215 17:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:45.215 17:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.781 17:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:45.781 17:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.781 17:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.781 [2024-12-07 17:24:18.896206] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:45.781 [2024-12-07 17:24:18.896555] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:45.781 [2024-12-07 17:24:18.896605] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:45.781 [2024-12-07 17:24:18.896898] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:07:45.781 [2024-12-07 17:24:18.897124] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:45.781 [2024-12-07 17:24:18.897174] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:45.781 BaseBdev2 00:07:45.781 [2024-12-07 17:24:18.897404] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:45.781 17:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.781 17:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:45.781 17:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:45.781 17:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:45.781 17:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:45.781 17:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:45.781 17:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:45.781 17:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:45.781 17:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.781 17:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.781 17:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.781 17:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:45.781 17:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.781 17:24:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.781 [ 00:07:45.781 { 00:07:45.781 "name": "BaseBdev2", 00:07:45.781 "aliases": [ 00:07:45.781 "6da2ee94-7a53-4da2-b38b-6dc81a734f80" 00:07:45.781 ], 00:07:45.781 "product_name": "Malloc disk", 00:07:45.781 "block_size": 512, 00:07:45.781 "num_blocks": 65536, 00:07:45.781 "uuid": "6da2ee94-7a53-4da2-b38b-6dc81a734f80", 00:07:45.781 "assigned_rate_limits": { 00:07:45.781 "rw_ios_per_sec": 0, 00:07:45.781 "rw_mbytes_per_sec": 0, 00:07:45.782 "r_mbytes_per_sec": 0, 00:07:45.782 "w_mbytes_per_sec": 0 00:07:45.782 }, 00:07:45.782 "claimed": true, 00:07:45.782 "claim_type": "exclusive_write", 00:07:45.782 "zoned": false, 00:07:45.782 "supported_io_types": { 00:07:45.782 "read": true, 00:07:45.782 "write": true, 00:07:45.782 "unmap": true, 00:07:45.782 "flush": true, 00:07:45.782 "reset": true, 00:07:45.782 "nvme_admin": false, 00:07:45.782 "nvme_io": false, 00:07:45.782 "nvme_io_md": false, 00:07:45.782 "write_zeroes": true, 00:07:45.782 "zcopy": true, 00:07:45.782 "get_zone_info": false, 00:07:45.782 "zone_management": false, 00:07:45.782 "zone_append": false, 00:07:45.782 "compare": false, 00:07:45.782 "compare_and_write": false, 00:07:45.782 "abort": true, 00:07:45.782 "seek_hole": false, 00:07:45.782 "seek_data": false, 00:07:45.782 "copy": true, 00:07:45.782 "nvme_iov_md": false 00:07:45.782 }, 00:07:45.782 "memory_domains": [ 00:07:45.782 { 00:07:45.782 "dma_device_id": "system", 00:07:45.782 "dma_device_type": 1 00:07:45.782 }, 00:07:45.782 { 00:07:45.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:45.782 "dma_device_type": 2 00:07:45.782 } 00:07:45.782 ], 00:07:45.782 "driver_specific": {} 00:07:45.782 } 00:07:45.782 ] 00:07:45.782 17:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.782 17:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:45.782 17:24:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:45.782 17:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:45.782 17:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:45.782 17:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:45.782 17:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:45.782 17:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:45.782 17:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:45.782 17:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:45.782 17:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:45.782 17:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:45.782 17:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:45.782 17:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:45.782 17:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.782 17:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.782 17:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.782 17:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:45.782 17:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.782 17:24:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:45.782 "name": "Existed_Raid", 00:07:45.782 "uuid": "c9e6f92a-2df8-4f30-ba7c-71ebf9ee3dd8", 00:07:45.782 "strip_size_kb": 64, 00:07:45.782 "state": "online", 00:07:45.782 "raid_level": "concat", 00:07:45.782 "superblock": true, 00:07:45.782 "num_base_bdevs": 2, 00:07:45.782 "num_base_bdevs_discovered": 2, 00:07:45.782 "num_base_bdevs_operational": 2, 00:07:45.782 "base_bdevs_list": [ 00:07:45.782 { 00:07:45.782 "name": "BaseBdev1", 00:07:45.782 "uuid": "8f58edbd-8ac6-41f3-8714-81cf4c30ad33", 00:07:45.782 "is_configured": true, 00:07:45.782 "data_offset": 2048, 00:07:45.782 "data_size": 63488 00:07:45.782 }, 00:07:45.782 { 00:07:45.782 "name": "BaseBdev2", 00:07:45.782 "uuid": "6da2ee94-7a53-4da2-b38b-6dc81a734f80", 00:07:45.782 "is_configured": true, 00:07:45.782 "data_offset": 2048, 00:07:45.782 "data_size": 63488 00:07:45.782 } 00:07:45.782 ] 00:07:45.782 }' 00:07:45.782 17:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:45.782 17:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.040 17:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:46.040 17:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:46.040 17:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:46.040 17:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:46.040 17:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:46.040 17:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:46.040 17:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:07:46.040 17:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.040 17:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:46.040 17:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.040 [2024-12-07 17:24:19.391689] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:46.040 17:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.299 17:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:46.299 "name": "Existed_Raid", 00:07:46.299 "aliases": [ 00:07:46.299 "c9e6f92a-2df8-4f30-ba7c-71ebf9ee3dd8" 00:07:46.299 ], 00:07:46.299 "product_name": "Raid Volume", 00:07:46.299 "block_size": 512, 00:07:46.299 "num_blocks": 126976, 00:07:46.299 "uuid": "c9e6f92a-2df8-4f30-ba7c-71ebf9ee3dd8", 00:07:46.299 "assigned_rate_limits": { 00:07:46.299 "rw_ios_per_sec": 0, 00:07:46.299 "rw_mbytes_per_sec": 0, 00:07:46.299 "r_mbytes_per_sec": 0, 00:07:46.299 "w_mbytes_per_sec": 0 00:07:46.299 }, 00:07:46.299 "claimed": false, 00:07:46.299 "zoned": false, 00:07:46.299 "supported_io_types": { 00:07:46.299 "read": true, 00:07:46.299 "write": true, 00:07:46.299 "unmap": true, 00:07:46.299 "flush": true, 00:07:46.299 "reset": true, 00:07:46.299 "nvme_admin": false, 00:07:46.299 "nvme_io": false, 00:07:46.299 "nvme_io_md": false, 00:07:46.299 "write_zeroes": true, 00:07:46.299 "zcopy": false, 00:07:46.299 "get_zone_info": false, 00:07:46.299 "zone_management": false, 00:07:46.299 "zone_append": false, 00:07:46.299 "compare": false, 00:07:46.299 "compare_and_write": false, 00:07:46.299 "abort": false, 00:07:46.299 "seek_hole": false, 00:07:46.299 "seek_data": false, 00:07:46.299 "copy": false, 00:07:46.299 "nvme_iov_md": false 00:07:46.299 }, 00:07:46.299 "memory_domains": [ 00:07:46.299 { 00:07:46.299 
"dma_device_id": "system", 00:07:46.299 "dma_device_type": 1 00:07:46.299 }, 00:07:46.299 { 00:07:46.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.299 "dma_device_type": 2 00:07:46.299 }, 00:07:46.299 { 00:07:46.299 "dma_device_id": "system", 00:07:46.299 "dma_device_type": 1 00:07:46.299 }, 00:07:46.299 { 00:07:46.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.299 "dma_device_type": 2 00:07:46.299 } 00:07:46.299 ], 00:07:46.299 "driver_specific": { 00:07:46.299 "raid": { 00:07:46.299 "uuid": "c9e6f92a-2df8-4f30-ba7c-71ebf9ee3dd8", 00:07:46.299 "strip_size_kb": 64, 00:07:46.299 "state": "online", 00:07:46.299 "raid_level": "concat", 00:07:46.299 "superblock": true, 00:07:46.299 "num_base_bdevs": 2, 00:07:46.299 "num_base_bdevs_discovered": 2, 00:07:46.299 "num_base_bdevs_operational": 2, 00:07:46.299 "base_bdevs_list": [ 00:07:46.299 { 00:07:46.299 "name": "BaseBdev1", 00:07:46.299 "uuid": "8f58edbd-8ac6-41f3-8714-81cf4c30ad33", 00:07:46.299 "is_configured": true, 00:07:46.299 "data_offset": 2048, 00:07:46.299 "data_size": 63488 00:07:46.299 }, 00:07:46.299 { 00:07:46.299 "name": "BaseBdev2", 00:07:46.299 "uuid": "6da2ee94-7a53-4da2-b38b-6dc81a734f80", 00:07:46.299 "is_configured": true, 00:07:46.299 "data_offset": 2048, 00:07:46.299 "data_size": 63488 00:07:46.299 } 00:07:46.299 ] 00:07:46.299 } 00:07:46.299 } 00:07:46.299 }' 00:07:46.299 17:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:46.299 17:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:46.299 BaseBdev2' 00:07:46.299 17:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:46.299 17:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:46.299 17:24:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:46.299 17:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:46.299 17:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.299 17:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.299 17:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:46.299 17:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.299 17:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:46.299 17:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:46.299 17:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:46.299 17:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:46.299 17:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.300 17:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.300 17:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:46.300 17:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.300 17:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:46.300 17:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:46.300 17:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:07:46.300 17:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.300 17:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.300 [2024-12-07 17:24:19.611177] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:46.300 [2024-12-07 17:24:19.611215] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:46.300 [2024-12-07 17:24:19.611272] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:46.559 17:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.559 17:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:46.559 17:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:46.559 17:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:46.559 17:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:46.559 17:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:46.559 17:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:46.559 17:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:46.559 17:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:46.559 17:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:46.559 17:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:46.559 17:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:07:46.559 17:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.559 17:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.559 17:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.559 17:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.559 17:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.560 17:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:46.560 17:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.560 17:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.560 17:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.560 17:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.560 "name": "Existed_Raid", 00:07:46.560 "uuid": "c9e6f92a-2df8-4f30-ba7c-71ebf9ee3dd8", 00:07:46.560 "strip_size_kb": 64, 00:07:46.560 "state": "offline", 00:07:46.560 "raid_level": "concat", 00:07:46.560 "superblock": true, 00:07:46.560 "num_base_bdevs": 2, 00:07:46.560 "num_base_bdevs_discovered": 1, 00:07:46.560 "num_base_bdevs_operational": 1, 00:07:46.560 "base_bdevs_list": [ 00:07:46.560 { 00:07:46.560 "name": null, 00:07:46.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.560 "is_configured": false, 00:07:46.560 "data_offset": 0, 00:07:46.560 "data_size": 63488 00:07:46.560 }, 00:07:46.560 { 00:07:46.560 "name": "BaseBdev2", 00:07:46.560 "uuid": "6da2ee94-7a53-4da2-b38b-6dc81a734f80", 00:07:46.560 "is_configured": true, 00:07:46.560 "data_offset": 2048, 00:07:46.560 "data_size": 63488 00:07:46.560 } 00:07:46.560 ] 
00:07:46.560 }' 00:07:46.560 17:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.560 17:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.819 17:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:46.819 17:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:46.819 17:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.819 17:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:46.819 17:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.819 17:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.819 17:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.078 17:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:47.078 17:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:47.078 17:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:47.078 17:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.078 17:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.078 [2024-12-07 17:24:20.222709] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:47.078 [2024-12-07 17:24:20.222815] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:47.079 17:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.079 17:24:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:47.079 17:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:47.079 17:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.079 17:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.079 17:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:47.079 17:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.079 17:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.079 17:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:47.079 17:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:47.079 17:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:47.079 17:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61990 00:07:47.079 17:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61990 ']' 00:07:47.079 17:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61990 00:07:47.079 17:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:47.079 17:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:47.079 17:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61990 00:07:47.079 killing process with pid 61990 00:07:47.079 17:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:47.079 17:24:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:47.079 17:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61990' 00:07:47.079 17:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61990 00:07:47.079 [2024-12-07 17:24:20.411498] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:47.079 17:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61990 00:07:47.079 [2024-12-07 17:24:20.429171] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:48.455 17:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:48.455 00:07:48.455 real 0m5.108s 00:07:48.455 user 0m7.397s 00:07:48.455 sys 0m0.825s 00:07:48.455 ************************************ 00:07:48.455 END TEST raid_state_function_test_sb 00:07:48.455 ************************************ 00:07:48.455 17:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:48.455 17:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.455 17:24:21 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:07:48.455 17:24:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:48.455 17:24:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:48.455 17:24:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:48.455 ************************************ 00:07:48.455 START TEST raid_superblock_test 00:07:48.455 ************************************ 00:07:48.455 17:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:07:48.455 17:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:07:48.455 17:24:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:48.455 17:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:48.455 17:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:48.455 17:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:48.455 17:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:48.455 17:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:48.455 17:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:48.455 17:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:48.455 17:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:48.455 17:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:48.455 17:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:48.455 17:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:48.455 17:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:07:48.455 17:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:48.455 17:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:48.455 17:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62242 00:07:48.455 17:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:48.455 17:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62242 00:07:48.455 17:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62242 ']' 00:07:48.455 
17:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.455 17:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:48.455 17:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.455 17:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:48.455 17:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.455 [2024-12-07 17:24:21.730327] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:07:48.455 [2024-12-07 17:24:21.730521] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62242 ] 00:07:48.714 [2024-12-07 17:24:21.901183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.714 [2024-12-07 17:24:22.014912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.974 [2024-12-07 17:24:22.215015] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:48.974 [2024-12-07 17:24:22.215154] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:49.383 17:24:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:49.383 17:24:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:49.383 17:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:49.383 17:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:07:49.383 17:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:49.383 17:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:49.383 17:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:49.383 17:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:49.383 17:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:49.383 17:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:49.383 17:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:49.383 17:24:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.383 17:24:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.383 malloc1 00:07:49.383 17:24:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.383 17:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:49.383 17:24:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.383 17:24:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.383 [2024-12-07 17:24:22.602835] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:49.383 [2024-12-07 17:24:22.602945] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:49.383 [2024-12-07 17:24:22.602988] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:49.383 [2024-12-07 17:24:22.603042] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:07:49.383 [2024-12-07 17:24:22.605386] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:49.383 [2024-12-07 17:24:22.605458] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:49.383 pt1 00:07:49.383 17:24:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.383 17:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:49.383 17:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:49.383 17:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:49.383 17:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:49.383 17:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:49.383 17:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:49.383 17:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:49.383 17:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:49.383 17:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:49.383 17:24:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.383 17:24:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.383 malloc2 00:07:49.383 17:24:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.383 17:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:49.383 17:24:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:49.383 17:24:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.383 [2024-12-07 17:24:22.659212] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:49.383 [2024-12-07 17:24:22.659326] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:49.383 [2024-12-07 17:24:22.659373] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:49.383 [2024-12-07 17:24:22.659411] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:49.383 [2024-12-07 17:24:22.661620] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:49.383 [2024-12-07 17:24:22.661703] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:49.383 pt2 00:07:49.383 17:24:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.383 17:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:49.383 17:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:49.383 17:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:49.383 17:24:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.383 17:24:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.383 [2024-12-07 17:24:22.671249] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:49.383 [2024-12-07 17:24:22.673150] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:49.383 [2024-12-07 17:24:22.673346] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:49.383 [2024-12-07 17:24:22.673393] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:07:49.383 [2024-12-07 17:24:22.673673] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:49.383 [2024-12-07 17:24:22.673869] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:49.383 [2024-12-07 17:24:22.673912] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:49.383 [2024-12-07 17:24:22.674107] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:49.383 17:24:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.383 17:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:49.383 17:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:49.383 17:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:49.383 17:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:49.383 17:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:49.383 17:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:49.383 17:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:49.383 17:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:49.383 17:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:49.383 17:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:49.383 17:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.383 17:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:49.383 17:24:22 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.383 17:24:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.641 17:24:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.641 17:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:49.641 "name": "raid_bdev1", 00:07:49.641 "uuid": "93158577-28f9-4e06-8eb8-d3ac575a4127", 00:07:49.641 "strip_size_kb": 64, 00:07:49.641 "state": "online", 00:07:49.641 "raid_level": "concat", 00:07:49.641 "superblock": true, 00:07:49.641 "num_base_bdevs": 2, 00:07:49.641 "num_base_bdevs_discovered": 2, 00:07:49.641 "num_base_bdevs_operational": 2, 00:07:49.641 "base_bdevs_list": [ 00:07:49.641 { 00:07:49.641 "name": "pt1", 00:07:49.641 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:49.641 "is_configured": true, 00:07:49.641 "data_offset": 2048, 00:07:49.641 "data_size": 63488 00:07:49.641 }, 00:07:49.641 { 00:07:49.641 "name": "pt2", 00:07:49.641 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:49.641 "is_configured": true, 00:07:49.641 "data_offset": 2048, 00:07:49.641 "data_size": 63488 00:07:49.641 } 00:07:49.641 ] 00:07:49.641 }' 00:07:49.641 17:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:49.641 17:24:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.900 17:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:49.900 17:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:49.900 17:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:49.900 17:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:49.900 17:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:49.900 
17:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:49.900 17:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:49.900 17:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.900 17:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:49.900 17:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.900 [2024-12-07 17:24:23.134796] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:49.900 17:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.900 17:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:49.900 "name": "raid_bdev1", 00:07:49.900 "aliases": [ 00:07:49.900 "93158577-28f9-4e06-8eb8-d3ac575a4127" 00:07:49.900 ], 00:07:49.900 "product_name": "Raid Volume", 00:07:49.900 "block_size": 512, 00:07:49.900 "num_blocks": 126976, 00:07:49.900 "uuid": "93158577-28f9-4e06-8eb8-d3ac575a4127", 00:07:49.900 "assigned_rate_limits": { 00:07:49.900 "rw_ios_per_sec": 0, 00:07:49.900 "rw_mbytes_per_sec": 0, 00:07:49.900 "r_mbytes_per_sec": 0, 00:07:49.900 "w_mbytes_per_sec": 0 00:07:49.900 }, 00:07:49.900 "claimed": false, 00:07:49.900 "zoned": false, 00:07:49.900 "supported_io_types": { 00:07:49.900 "read": true, 00:07:49.900 "write": true, 00:07:49.900 "unmap": true, 00:07:49.900 "flush": true, 00:07:49.900 "reset": true, 00:07:49.900 "nvme_admin": false, 00:07:49.900 "nvme_io": false, 00:07:49.900 "nvme_io_md": false, 00:07:49.900 "write_zeroes": true, 00:07:49.900 "zcopy": false, 00:07:49.900 "get_zone_info": false, 00:07:49.900 "zone_management": false, 00:07:49.900 "zone_append": false, 00:07:49.900 "compare": false, 00:07:49.900 "compare_and_write": false, 00:07:49.900 "abort": false, 00:07:49.900 "seek_hole": false, 00:07:49.900 
"seek_data": false, 00:07:49.900 "copy": false, 00:07:49.900 "nvme_iov_md": false 00:07:49.900 }, 00:07:49.900 "memory_domains": [ 00:07:49.900 { 00:07:49.900 "dma_device_id": "system", 00:07:49.900 "dma_device_type": 1 00:07:49.900 }, 00:07:49.900 { 00:07:49.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:49.900 "dma_device_type": 2 00:07:49.900 }, 00:07:49.900 { 00:07:49.900 "dma_device_id": "system", 00:07:49.900 "dma_device_type": 1 00:07:49.900 }, 00:07:49.900 { 00:07:49.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:49.900 "dma_device_type": 2 00:07:49.900 } 00:07:49.900 ], 00:07:49.900 "driver_specific": { 00:07:49.900 "raid": { 00:07:49.900 "uuid": "93158577-28f9-4e06-8eb8-d3ac575a4127", 00:07:49.900 "strip_size_kb": 64, 00:07:49.900 "state": "online", 00:07:49.900 "raid_level": "concat", 00:07:49.900 "superblock": true, 00:07:49.900 "num_base_bdevs": 2, 00:07:49.900 "num_base_bdevs_discovered": 2, 00:07:49.900 "num_base_bdevs_operational": 2, 00:07:49.900 "base_bdevs_list": [ 00:07:49.900 { 00:07:49.900 "name": "pt1", 00:07:49.900 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:49.900 "is_configured": true, 00:07:49.900 "data_offset": 2048, 00:07:49.900 "data_size": 63488 00:07:49.900 }, 00:07:49.900 { 00:07:49.900 "name": "pt2", 00:07:49.900 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:49.900 "is_configured": true, 00:07:49.900 "data_offset": 2048, 00:07:49.900 "data_size": 63488 00:07:49.900 } 00:07:49.900 ] 00:07:49.900 } 00:07:49.900 } 00:07:49.900 }' 00:07:49.900 17:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:49.900 17:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:49.900 pt2' 00:07:49.900 17:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:49.900 17:24:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:49.900 17:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:49.900 17:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:49.900 17:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:49.900 17:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.900 17:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.159 17:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.159 17:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:50.159 17:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:50.159 17:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:50.159 17:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:50.159 17:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.159 17:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:50.159 17:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.159 17:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.160 17:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:50.160 17:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:50.160 17:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:07:50.160 17:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.160 17:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.160 17:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:50.160 [2024-12-07 17:24:23.362340] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:50.160 17:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.160 17:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=93158577-28f9-4e06-8eb8-d3ac575a4127 00:07:50.160 17:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 93158577-28f9-4e06-8eb8-d3ac575a4127 ']' 00:07:50.160 17:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:50.160 17:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.160 17:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.160 [2024-12-07 17:24:23.413994] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:50.160 [2024-12-07 17:24:23.414018] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:50.160 [2024-12-07 17:24:23.414102] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:50.160 [2024-12-07 17:24:23.414154] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:50.160 [2024-12-07 17:24:23.414167] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:50.160 17:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.160 17:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:50.160 17:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.160 17:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.160 17:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:50.160 17:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.160 17:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:50.160 17:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:50.160 17:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:50.160 17:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:50.160 17:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.160 17:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.160 17:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.160 17:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:50.160 17:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:50.160 17:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.160 17:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.160 17:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.160 17:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:50.160 17:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.160 17:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:07:50.160 17:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:50.160 17:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.420 17:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:50.420 17:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:50.420 17:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:50.420 17:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:50.420 17:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:50.420 17:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:50.420 17:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:50.420 17:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:50.420 17:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:50.420 17:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.420 17:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.420 [2024-12-07 17:24:23.549781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:50.420 [2024-12-07 17:24:23.551742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:50.420 [2024-12-07 17:24:23.551810] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: 
*ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:50.420 [2024-12-07 17:24:23.551866] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:50.420 [2024-12-07 17:24:23.551883] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:50.420 [2024-12-07 17:24:23.551895] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:50.420 request: 00:07:50.420 { 00:07:50.420 "name": "raid_bdev1", 00:07:50.420 "raid_level": "concat", 00:07:50.420 "base_bdevs": [ 00:07:50.420 "malloc1", 00:07:50.420 "malloc2" 00:07:50.420 ], 00:07:50.420 "strip_size_kb": 64, 00:07:50.420 "superblock": false, 00:07:50.420 "method": "bdev_raid_create", 00:07:50.420 "req_id": 1 00:07:50.420 } 00:07:50.420 Got JSON-RPC error response 00:07:50.420 response: 00:07:50.420 { 00:07:50.420 "code": -17, 00:07:50.420 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:50.420 } 00:07:50.420 17:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:50.420 17:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:50.420 17:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:50.420 17:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:50.420 17:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:50.420 17:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.420 17:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:50.420 17:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.420 17:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.420 
17:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.420 17:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:50.420 17:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:50.420 17:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:50.420 17:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.420 17:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.420 [2024-12-07 17:24:23.617670] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:50.420 [2024-12-07 17:24:23.617772] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:50.420 [2024-12-07 17:24:23.617820] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:50.420 [2024-12-07 17:24:23.617856] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:50.420 [2024-12-07 17:24:23.620332] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:50.420 [2024-12-07 17:24:23.620422] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:50.420 [2024-12-07 17:24:23.620555] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:50.420 [2024-12-07 17:24:23.620637] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:50.420 pt1 00:07:50.420 17:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.420 17:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:07:50.420 17:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:07:50.420 17:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:50.420 17:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:50.420 17:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:50.420 17:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:50.420 17:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.420 17:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.420 17:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.420 17:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.420 17:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.420 17:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:50.420 17:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.420 17:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.420 17:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.420 17:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.420 "name": "raid_bdev1", 00:07:50.420 "uuid": "93158577-28f9-4e06-8eb8-d3ac575a4127", 00:07:50.420 "strip_size_kb": 64, 00:07:50.420 "state": "configuring", 00:07:50.420 "raid_level": "concat", 00:07:50.420 "superblock": true, 00:07:50.420 "num_base_bdevs": 2, 00:07:50.420 "num_base_bdevs_discovered": 1, 00:07:50.420 "num_base_bdevs_operational": 2, 00:07:50.420 "base_bdevs_list": [ 00:07:50.420 { 00:07:50.420 "name": "pt1", 00:07:50.420 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:07:50.420 "is_configured": true, 00:07:50.420 "data_offset": 2048, 00:07:50.420 "data_size": 63488 00:07:50.420 }, 00:07:50.420 { 00:07:50.420 "name": null, 00:07:50.420 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:50.420 "is_configured": false, 00:07:50.420 "data_offset": 2048, 00:07:50.420 "data_size": 63488 00:07:50.420 } 00:07:50.420 ] 00:07:50.420 }' 00:07:50.420 17:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.420 17:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.680 17:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:50.680 17:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:50.680 17:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:50.680 17:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:50.680 17:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.680 17:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.939 [2024-12-07 17:24:24.060981] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:50.939 [2024-12-07 17:24:24.061057] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:50.939 [2024-12-07 17:24:24.061081] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:50.939 [2024-12-07 17:24:24.061094] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:50.939 [2024-12-07 17:24:24.061585] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:50.939 [2024-12-07 17:24:24.061622] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:07:50.939 [2024-12-07 17:24:24.061711] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:50.939 [2024-12-07 17:24:24.061744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:50.939 [2024-12-07 17:24:24.061881] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:50.939 [2024-12-07 17:24:24.061893] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:50.939 [2024-12-07 17:24:24.062166] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:50.939 [2024-12-07 17:24:24.062319] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:50.939 [2024-12-07 17:24:24.062328] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:50.939 [2024-12-07 17:24:24.062471] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:50.939 pt2 00:07:50.939 17:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.939 17:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:50.939 17:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:50.939 17:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:50.939 17:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:50.939 17:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:50.939 17:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:50.939 17:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:50.939 17:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=2 00:07:50.939 17:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.939 17:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.939 17:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.939 17:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.939 17:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.939 17:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:50.939 17:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.939 17:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.939 17:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.939 17:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.939 "name": "raid_bdev1", 00:07:50.939 "uuid": "93158577-28f9-4e06-8eb8-d3ac575a4127", 00:07:50.939 "strip_size_kb": 64, 00:07:50.939 "state": "online", 00:07:50.939 "raid_level": "concat", 00:07:50.939 "superblock": true, 00:07:50.939 "num_base_bdevs": 2, 00:07:50.939 "num_base_bdevs_discovered": 2, 00:07:50.939 "num_base_bdevs_operational": 2, 00:07:50.939 "base_bdevs_list": [ 00:07:50.939 { 00:07:50.939 "name": "pt1", 00:07:50.939 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:50.939 "is_configured": true, 00:07:50.939 "data_offset": 2048, 00:07:50.939 "data_size": 63488 00:07:50.939 }, 00:07:50.939 { 00:07:50.939 "name": "pt2", 00:07:50.939 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:50.939 "is_configured": true, 00:07:50.939 "data_offset": 2048, 00:07:50.939 "data_size": 63488 00:07:50.939 } 00:07:50.939 ] 00:07:50.939 }' 00:07:50.939 17:24:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.939 17:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.199 17:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:51.199 17:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:51.199 17:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:51.199 17:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:51.199 17:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:51.199 17:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:51.199 17:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:51.199 17:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.199 17:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.199 17:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:51.199 [2024-12-07 17:24:24.524457] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:51.199 17:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.199 17:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:51.199 "name": "raid_bdev1", 00:07:51.199 "aliases": [ 00:07:51.199 "93158577-28f9-4e06-8eb8-d3ac575a4127" 00:07:51.199 ], 00:07:51.199 "product_name": "Raid Volume", 00:07:51.199 "block_size": 512, 00:07:51.199 "num_blocks": 126976, 00:07:51.199 "uuid": "93158577-28f9-4e06-8eb8-d3ac575a4127", 00:07:51.199 "assigned_rate_limits": { 00:07:51.199 "rw_ios_per_sec": 0, 00:07:51.199 "rw_mbytes_per_sec": 0, 00:07:51.199 
"r_mbytes_per_sec": 0, 00:07:51.199 "w_mbytes_per_sec": 0 00:07:51.199 }, 00:07:51.199 "claimed": false, 00:07:51.199 "zoned": false, 00:07:51.199 "supported_io_types": { 00:07:51.199 "read": true, 00:07:51.199 "write": true, 00:07:51.199 "unmap": true, 00:07:51.199 "flush": true, 00:07:51.199 "reset": true, 00:07:51.199 "nvme_admin": false, 00:07:51.199 "nvme_io": false, 00:07:51.199 "nvme_io_md": false, 00:07:51.199 "write_zeroes": true, 00:07:51.200 "zcopy": false, 00:07:51.200 "get_zone_info": false, 00:07:51.200 "zone_management": false, 00:07:51.200 "zone_append": false, 00:07:51.200 "compare": false, 00:07:51.200 "compare_and_write": false, 00:07:51.200 "abort": false, 00:07:51.200 "seek_hole": false, 00:07:51.200 "seek_data": false, 00:07:51.200 "copy": false, 00:07:51.200 "nvme_iov_md": false 00:07:51.200 }, 00:07:51.200 "memory_domains": [ 00:07:51.200 { 00:07:51.200 "dma_device_id": "system", 00:07:51.200 "dma_device_type": 1 00:07:51.200 }, 00:07:51.200 { 00:07:51.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.200 "dma_device_type": 2 00:07:51.200 }, 00:07:51.200 { 00:07:51.200 "dma_device_id": "system", 00:07:51.200 "dma_device_type": 1 00:07:51.200 }, 00:07:51.200 { 00:07:51.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.200 "dma_device_type": 2 00:07:51.200 } 00:07:51.200 ], 00:07:51.200 "driver_specific": { 00:07:51.200 "raid": { 00:07:51.200 "uuid": "93158577-28f9-4e06-8eb8-d3ac575a4127", 00:07:51.200 "strip_size_kb": 64, 00:07:51.200 "state": "online", 00:07:51.200 "raid_level": "concat", 00:07:51.200 "superblock": true, 00:07:51.200 "num_base_bdevs": 2, 00:07:51.200 "num_base_bdevs_discovered": 2, 00:07:51.200 "num_base_bdevs_operational": 2, 00:07:51.200 "base_bdevs_list": [ 00:07:51.200 { 00:07:51.200 "name": "pt1", 00:07:51.200 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:51.200 "is_configured": true, 00:07:51.200 "data_offset": 2048, 00:07:51.200 "data_size": 63488 00:07:51.200 }, 00:07:51.200 { 00:07:51.200 "name": 
"pt2", 00:07:51.200 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:51.200 "is_configured": true, 00:07:51.200 "data_offset": 2048, 00:07:51.200 "data_size": 63488 00:07:51.200 } 00:07:51.200 ] 00:07:51.200 } 00:07:51.200 } 00:07:51.200 }' 00:07:51.200 17:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:51.460 17:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:51.460 pt2' 00:07:51.460 17:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:51.460 17:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:51.461 17:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:51.461 17:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:51.461 17:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.461 17:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.461 17:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:51.461 17:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.461 17:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:51.461 17:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:51.461 17:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:51.461 17:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:51.461 17:24:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:51.461 17:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.461 17:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.461 17:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.461 17:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:51.461 17:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:51.461 17:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:51.461 17:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:51.461 17:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.461 17:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.461 [2024-12-07 17:24:24.771975] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:51.461 17:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.461 17:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 93158577-28f9-4e06-8eb8-d3ac575a4127 '!=' 93158577-28f9-4e06-8eb8-d3ac575a4127 ']' 00:07:51.461 17:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:07:51.461 17:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:51.461 17:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:51.461 17:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62242 00:07:51.461 17:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62242 ']' 00:07:51.461 17:24:24 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@958 -- # kill -0 62242 00:07:51.461 17:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:51.461 17:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:51.461 17:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62242 00:07:51.722 killing process with pid 62242 00:07:51.722 17:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:51.722 17:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:51.722 17:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62242' 00:07:51.722 17:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62242 00:07:51.722 [2024-12-07 17:24:24.863402] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:51.722 [2024-12-07 17:24:24.863499] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:51.722 [2024-12-07 17:24:24.863568] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:51.722 17:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62242 00:07:51.722 [2024-12-07 17:24:24.863583] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:51.722 [2024-12-07 17:24:25.075816] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:53.104 17:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:53.104 00:07:53.104 real 0m4.568s 00:07:53.104 user 0m6.415s 00:07:53.104 sys 0m0.781s 00:07:53.104 17:24:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:53.104 17:24:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:07:53.104 ************************************ 00:07:53.105 END TEST raid_superblock_test 00:07:53.105 ************************************ 00:07:53.105 17:24:26 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:07:53.105 17:24:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:53.105 17:24:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:53.105 17:24:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:53.105 ************************************ 00:07:53.105 START TEST raid_read_error_test 00:07:53.105 ************************************ 00:07:53.105 17:24:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:07:53.105 17:24:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:53.105 17:24:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:53.105 17:24:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:53.105 17:24:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:53.105 17:24:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:53.105 17:24:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:53.105 17:24:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:53.105 17:24:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:53.105 17:24:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:53.105 17:24:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:53.105 17:24:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:53.105 17:24:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- 
# base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:53.105 17:24:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:53.105 17:24:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:53.105 17:24:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:53.105 17:24:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:53.105 17:24:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:53.105 17:24:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:53.105 17:24:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:53.105 17:24:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:53.105 17:24:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:53.105 17:24:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:53.105 17:24:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.TYyVzS1Fn3 00:07:53.105 17:24:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62448 00:07:53.105 17:24:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:53.105 17:24:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62448 00:07:53.105 17:24:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62448 ']' 00:07:53.105 17:24:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.105 17:24:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:53.105 17:24:26 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.105 17:24:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:53.105 17:24:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.105 [2024-12-07 17:24:26.369002] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:07:53.105 [2024-12-07 17:24:26.369187] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62448 ] 00:07:53.365 [2024-12-07 17:24:26.541230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.365 [2024-12-07 17:24:26.653143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.626 [2024-12-07 17:24:26.851341] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:53.626 [2024-12-07 17:24:26.851467] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:53.887 17:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:53.887 17:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:53.887 17:24:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:53.887 17:24:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:53.887 17:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.887 17:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.887 BaseBdev1_malloc 
00:07:53.887 17:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.887 17:24:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:53.887 17:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.887 17:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.887 true 00:07:53.887 17:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.887 17:24:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:53.887 17:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.887 17:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.887 [2024-12-07 17:24:27.254194] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:53.887 [2024-12-07 17:24:27.254250] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:53.887 [2024-12-07 17:24:27.254272] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:53.887 [2024-12-07 17:24:27.254283] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:53.887 [2024-12-07 17:24:27.256531] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:53.887 [2024-12-07 17:24:27.256642] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:53.887 BaseBdev1 00:07:53.887 17:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.887 17:24:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:53.887 17:24:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2_malloc 00:07:53.887 17:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.887 17:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.147 BaseBdev2_malloc 00:07:54.147 17:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.147 17:24:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:54.147 17:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.147 17:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.147 true 00:07:54.147 17:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.147 17:24:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:54.147 17:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.147 17:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.147 [2024-12-07 17:24:27.322746] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:54.147 [2024-12-07 17:24:27.322799] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:54.147 [2024-12-07 17:24:27.322816] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:54.147 [2024-12-07 17:24:27.322826] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:54.147 [2024-12-07 17:24:27.325045] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:54.147 [2024-12-07 17:24:27.325079] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:54.147 BaseBdev2 00:07:54.147 17:24:27 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.147 17:24:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:54.147 17:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.147 17:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.147 [2024-12-07 17:24:27.334785] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:54.147 [2024-12-07 17:24:27.336616] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:54.147 [2024-12-07 17:24:27.336806] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:54.147 [2024-12-07 17:24:27.336821] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:54.147 [2024-12-07 17:24:27.337066] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:54.147 [2024-12-07 17:24:27.337250] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:54.147 [2024-12-07 17:24:27.337262] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:54.147 [2024-12-07 17:24:27.337406] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:54.147 17:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.147 17:24:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:54.147 17:24:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:54.147 17:24:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:54.147 17:24:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- 
# local raid_level=concat 00:07:54.147 17:24:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:54.147 17:24:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:54.147 17:24:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:54.147 17:24:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:54.147 17:24:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:54.147 17:24:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:54.147 17:24:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.147 17:24:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:54.147 17:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.147 17:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.147 17:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.147 17:24:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:54.147 "name": "raid_bdev1", 00:07:54.147 "uuid": "d61a3558-f714-44aa-b624-3dd1f8202490", 00:07:54.147 "strip_size_kb": 64, 00:07:54.147 "state": "online", 00:07:54.147 "raid_level": "concat", 00:07:54.148 "superblock": true, 00:07:54.148 "num_base_bdevs": 2, 00:07:54.148 "num_base_bdevs_discovered": 2, 00:07:54.148 "num_base_bdevs_operational": 2, 00:07:54.148 "base_bdevs_list": [ 00:07:54.148 { 00:07:54.148 "name": "BaseBdev1", 00:07:54.148 "uuid": "7f7ad374-6f2f-5d51-86ab-e33ae4e8fb79", 00:07:54.148 "is_configured": true, 00:07:54.148 "data_offset": 2048, 00:07:54.148 "data_size": 63488 00:07:54.148 }, 00:07:54.148 { 00:07:54.148 "name": "BaseBdev2", 00:07:54.148 
"uuid": "0edd12e7-45a1-51c9-9662-37a916b32012", 00:07:54.148 "is_configured": true, 00:07:54.148 "data_offset": 2048, 00:07:54.148 "data_size": 63488 00:07:54.148 } 00:07:54.148 ] 00:07:54.148 }' 00:07:54.148 17:24:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:54.148 17:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.408 17:24:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:54.408 17:24:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:54.669 [2024-12-07 17:24:27.867174] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:55.610 17:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:55.610 17:24:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.610 17:24:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.610 17:24:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.610 17:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:55.610 17:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:55.610 17:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:55.610 17:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:55.610 17:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:55.610 17:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:55.610 17:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # 
local raid_level=concat 00:07:55.610 17:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:55.610 17:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:55.610 17:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:55.610 17:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:55.610 17:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:55.610 17:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:55.610 17:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.610 17:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:55.610 17:24:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.610 17:24:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.610 17:24:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.610 17:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.610 "name": "raid_bdev1", 00:07:55.610 "uuid": "d61a3558-f714-44aa-b624-3dd1f8202490", 00:07:55.610 "strip_size_kb": 64, 00:07:55.610 "state": "online", 00:07:55.610 "raid_level": "concat", 00:07:55.610 "superblock": true, 00:07:55.610 "num_base_bdevs": 2, 00:07:55.610 "num_base_bdevs_discovered": 2, 00:07:55.610 "num_base_bdevs_operational": 2, 00:07:55.610 "base_bdevs_list": [ 00:07:55.610 { 00:07:55.610 "name": "BaseBdev1", 00:07:55.610 "uuid": "7f7ad374-6f2f-5d51-86ab-e33ae4e8fb79", 00:07:55.610 "is_configured": true, 00:07:55.610 "data_offset": 2048, 00:07:55.610 "data_size": 63488 00:07:55.610 }, 00:07:55.610 { 00:07:55.610 "name": "BaseBdev2", 00:07:55.610 "uuid": 
"0edd12e7-45a1-51c9-9662-37a916b32012", 00:07:55.610 "is_configured": true, 00:07:55.610 "data_offset": 2048, 00:07:55.610 "data_size": 63488 00:07:55.610 } 00:07:55.610 ] 00:07:55.610 }' 00:07:55.610 17:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.610 17:24:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.871 17:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:55.871 17:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.871 17:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.871 [2024-12-07 17:24:29.246750] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:55.871 [2024-12-07 17:24:29.246838] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:55.871 [2024-12-07 17:24:29.249742] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:55.871 [2024-12-07 17:24:29.249828] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:55.871 [2024-12-07 17:24:29.249878] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:55.871 [2024-12-07 17:24:29.249923] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:56.130 { 00:07:56.130 "results": [ 00:07:56.130 { 00:07:56.130 "job": "raid_bdev1", 00:07:56.130 "core_mask": "0x1", 00:07:56.130 "workload": "randrw", 00:07:56.130 "percentage": 50, 00:07:56.130 "status": "finished", 00:07:56.130 "queue_depth": 1, 00:07:56.130 "io_size": 131072, 00:07:56.130 "runtime": 1.380767, 00:07:56.130 "iops": 15817.295749391462, 00:07:56.130 "mibps": 1977.1619686739327, 00:07:56.130 "io_failed": 1, 00:07:56.130 "io_timeout": 0, 00:07:56.130 "avg_latency_us": 
87.3644582951538, 00:07:56.130 "min_latency_us": 25.2646288209607, 00:07:56.130 "max_latency_us": 1373.6803493449781 00:07:56.130 } 00:07:56.130 ], 00:07:56.130 "core_count": 1 00:07:56.130 } 00:07:56.130 17:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.130 17:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62448 00:07:56.130 17:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62448 ']' 00:07:56.130 17:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62448 00:07:56.130 17:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:56.130 17:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:56.130 17:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62448 00:07:56.130 killing process with pid 62448 00:07:56.130 17:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:56.130 17:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:56.130 17:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62448' 00:07:56.130 17:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62448 00:07:56.130 [2024-12-07 17:24:29.299469] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:56.130 17:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62448 00:07:56.130 [2024-12-07 17:24:29.438819] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:57.512 17:24:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.TYyVzS1Fn3 00:07:57.512 17:24:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:57.512 
17:24:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:57.512 17:24:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:07:57.512 17:24:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:57.512 17:24:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:57.512 17:24:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:57.512 ************************************ 00:07:57.512 END TEST raid_read_error_test 00:07:57.512 ************************************ 00:07:57.512 17:24:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:07:57.512 00:07:57.512 real 0m4.366s 00:07:57.512 user 0m5.226s 00:07:57.512 sys 0m0.540s 00:07:57.512 17:24:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.512 17:24:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.512 17:24:30 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:07:57.512 17:24:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:57.512 17:24:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.512 17:24:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:57.512 ************************************ 00:07:57.512 START TEST raid_write_error_test 00:07:57.512 ************************************ 00:07:57.512 17:24:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:07:57.512 17:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:57.512 17:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:57.512 17:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 
00:07:57.512 17:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:57.512 17:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:57.512 17:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:57.512 17:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:57.512 17:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:57.512 17:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:57.512 17:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:57.512 17:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:57.512 17:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:57.512 17:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:57.512 17:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:57.512 17:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:57.512 17:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:57.512 17:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:57.512 17:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:57.512 17:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:57.512 17:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:57.512 17:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:57.512 17:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:57.512 17:24:30 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.7BNzRdungs 00:07:57.512 17:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62588 00:07:57.512 17:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:57.512 17:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62588 00:07:57.512 17:24:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62588 ']' 00:07:57.512 17:24:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.512 17:24:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:57.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.512 17:24:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.512 17:24:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:57.512 17:24:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.512 [2024-12-07 17:24:30.806836] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:07:57.512 [2024-12-07 17:24:30.806957] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62588 ] 00:07:57.772 [2024-12-07 17:24:30.979139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.772 [2024-12-07 17:24:31.091960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.031 [2024-12-07 17:24:31.298868] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:58.031 [2024-12-07 17:24:31.298946] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:58.291 17:24:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:58.291 17:24:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:58.291 17:24:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:58.291 17:24:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:58.291 17:24:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.291 17:24:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.551 BaseBdev1_malloc 00:07:58.551 17:24:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.551 17:24:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:58.551 17:24:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.551 17:24:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.551 true 00:07:58.551 17:24:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:07:58.551 17:24:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:58.551 17:24:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.551 17:24:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.551 [2024-12-07 17:24:31.706471] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:58.551 [2024-12-07 17:24:31.706526] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:58.551 [2024-12-07 17:24:31.706545] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:58.551 [2024-12-07 17:24:31.706555] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:58.551 [2024-12-07 17:24:31.708781] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:58.551 [2024-12-07 17:24:31.708820] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:58.551 BaseBdev1 00:07:58.551 17:24:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.551 17:24:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:58.551 17:24:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:58.551 17:24:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.551 17:24:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.551 BaseBdev2_malloc 00:07:58.551 17:24:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.551 17:24:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:58.551 17:24:31 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.551 17:24:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.551 true 00:07:58.551 17:24:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.551 17:24:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:58.551 17:24:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.551 17:24:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.551 [2024-12-07 17:24:31.773894] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:58.551 [2024-12-07 17:24:31.774025] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:58.551 [2024-12-07 17:24:31.774046] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:58.551 [2024-12-07 17:24:31.774056] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:58.551 [2024-12-07 17:24:31.776337] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:58.551 [2024-12-07 17:24:31.776375] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:58.551 BaseBdev2 00:07:58.551 17:24:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.551 17:24:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:58.551 17:24:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.551 17:24:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.551 [2024-12-07 17:24:31.785948] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:58.551 [2024-12-07 17:24:31.787728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:58.551 [2024-12-07 17:24:31.787945] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:58.551 [2024-12-07 17:24:31.787962] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:58.551 [2024-12-07 17:24:31.788193] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:58.551 [2024-12-07 17:24:31.788377] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:58.551 [2024-12-07 17:24:31.788454] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:58.551 [2024-12-07 17:24:31.788632] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:58.551 17:24:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.551 17:24:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:58.551 17:24:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:58.551 17:24:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:58.551 17:24:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:58.551 17:24:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:58.551 17:24:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:58.551 17:24:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.551 17:24:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.551 17:24:31 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.551 17:24:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.551 17:24:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.551 17:24:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:58.551 17:24:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.551 17:24:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.551 17:24:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.551 17:24:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.551 "name": "raid_bdev1", 00:07:58.551 "uuid": "ee72ca04-bb4e-45a6-bc8d-531f04bfd7a2", 00:07:58.551 "strip_size_kb": 64, 00:07:58.551 "state": "online", 00:07:58.551 "raid_level": "concat", 00:07:58.551 "superblock": true, 00:07:58.551 "num_base_bdevs": 2, 00:07:58.551 "num_base_bdevs_discovered": 2, 00:07:58.551 "num_base_bdevs_operational": 2, 00:07:58.551 "base_bdevs_list": [ 00:07:58.551 { 00:07:58.551 "name": "BaseBdev1", 00:07:58.551 "uuid": "27010583-5167-52da-93bf-8dd9c6982991", 00:07:58.551 "is_configured": true, 00:07:58.551 "data_offset": 2048, 00:07:58.551 "data_size": 63488 00:07:58.551 }, 00:07:58.551 { 00:07:58.551 "name": "BaseBdev2", 00:07:58.551 "uuid": "6ec65510-3f3e-5a84-a1fe-c8a0b4320551", 00:07:58.551 "is_configured": true, 00:07:58.551 "data_offset": 2048, 00:07:58.551 "data_size": 63488 00:07:58.552 } 00:07:58.552 ] 00:07:58.552 }' 00:07:58.552 17:24:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.552 17:24:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.121 17:24:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:07:59.121 17:24:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:59.121 [2024-12-07 17:24:32.326407] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:00.060 17:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:00.060 17:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.060 17:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.060 17:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.060 17:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:00.060 17:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:00.060 17:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:00.060 17:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:00.060 17:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:00.060 17:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:00.060 17:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:00.060 17:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:00.060 17:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:00.060 17:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.060 17:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:08:00.060 17:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.060 17:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.060 17:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.060 17:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:00.060 17:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.060 17:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.060 17:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.060 17:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.060 "name": "raid_bdev1", 00:08:00.060 "uuid": "ee72ca04-bb4e-45a6-bc8d-531f04bfd7a2", 00:08:00.060 "strip_size_kb": 64, 00:08:00.060 "state": "online", 00:08:00.060 "raid_level": "concat", 00:08:00.060 "superblock": true, 00:08:00.060 "num_base_bdevs": 2, 00:08:00.060 "num_base_bdevs_discovered": 2, 00:08:00.060 "num_base_bdevs_operational": 2, 00:08:00.060 "base_bdevs_list": [ 00:08:00.060 { 00:08:00.060 "name": "BaseBdev1", 00:08:00.060 "uuid": "27010583-5167-52da-93bf-8dd9c6982991", 00:08:00.060 "is_configured": true, 00:08:00.060 "data_offset": 2048, 00:08:00.060 "data_size": 63488 00:08:00.060 }, 00:08:00.060 { 00:08:00.060 "name": "BaseBdev2", 00:08:00.060 "uuid": "6ec65510-3f3e-5a84-a1fe-c8a0b4320551", 00:08:00.060 "is_configured": true, 00:08:00.061 "data_offset": 2048, 00:08:00.061 "data_size": 63488 00:08:00.061 } 00:08:00.061 ] 00:08:00.061 }' 00:08:00.061 17:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.061 17:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.320 17:24:33 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:00.320 17:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.320 17:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.320 [2024-12-07 17:24:33.694837] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:00.320 [2024-12-07 17:24:33.694876] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:00.320 [2024-12-07 17:24:33.697867] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:00.321 [2024-12-07 17:24:33.697910] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:00.321 [2024-12-07 17:24:33.697950] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:00.321 [2024-12-07 17:24:33.697963] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:00.321 { 00:08:00.321 "results": [ 00:08:00.321 { 00:08:00.321 "job": "raid_bdev1", 00:08:00.321 "core_mask": "0x1", 00:08:00.321 "workload": "randrw", 00:08:00.321 "percentage": 50, 00:08:00.321 "status": "finished", 00:08:00.321 "queue_depth": 1, 00:08:00.321 "io_size": 131072, 00:08:00.321 "runtime": 1.369331, 00:08:00.321 "iops": 15508.302959620427, 00:08:00.321 "mibps": 1938.5378699525534, 00:08:00.321 "io_failed": 1, 00:08:00.321 "io_timeout": 0, 00:08:00.321 "avg_latency_us": 89.17613401509642, 00:08:00.321 "min_latency_us": 26.494323144104804, 00:08:00.321 "max_latency_us": 1616.9362445414847 00:08:00.321 } 00:08:00.321 ], 00:08:00.321 "core_count": 1 00:08:00.321 } 00:08:00.321 17:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.321 17:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62588 00:08:00.580 17:24:33 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 62588 ']' 00:08:00.580 17:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62588 00:08:00.580 17:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:00.580 17:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:00.580 17:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62588 00:08:00.580 17:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:00.580 17:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:00.580 17:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62588' 00:08:00.580 killing process with pid 62588 00:08:00.580 17:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62588 00:08:00.580 [2024-12-07 17:24:33.745023] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:00.580 17:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62588 00:08:00.580 [2024-12-07 17:24:33.878783] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:02.010 17:24:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.7BNzRdungs 00:08:02.010 17:24:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:02.010 17:24:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:02.010 17:24:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:08:02.010 17:24:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:02.010 17:24:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:02.010 17:24:35 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:02.010 ************************************ 00:08:02.010 END TEST raid_write_error_test 00:08:02.010 ************************************ 00:08:02.010 17:24:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:08:02.010 00:08:02.010 real 0m4.363s 00:08:02.010 user 0m5.245s 00:08:02.010 sys 0m0.520s 00:08:02.010 17:24:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:02.010 17:24:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.010 17:24:35 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:02.010 17:24:35 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:08:02.010 17:24:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:02.010 17:24:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:02.010 17:24:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:02.010 ************************************ 00:08:02.010 START TEST raid_state_function_test 00:08:02.010 ************************************ 00:08:02.010 17:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:08:02.010 17:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:02.010 17:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:02.010 17:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:02.010 17:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:02.010 17:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:02.010 17:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i <= num_base_bdevs )) 00:08:02.010 17:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:02.010 17:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:02.010 17:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:02.010 17:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:02.010 17:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:02.010 17:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:02.010 17:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:02.010 17:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:02.010 17:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:02.010 17:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:02.010 17:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:02.010 17:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:02.010 17:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:02.010 17:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:02.010 17:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:02.010 17:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:02.010 17:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62732 00:08:02.010 17:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:02.010 17:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62732' 00:08:02.010 Process raid pid: 62732 00:08:02.010 17:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62732 00:08:02.010 17:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62732 ']' 00:08:02.010 17:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.010 17:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:02.010 17:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:02.010 17:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:02.010 17:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.010 [2024-12-07 17:24:35.232900] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:08:02.010 [2024-12-07 17:24:35.233118] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:02.269 [2024-12-07 17:24:35.404902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.269 [2024-12-07 17:24:35.521295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.528 [2024-12-07 17:24:35.728251] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:02.528 [2024-12-07 17:24:35.728369] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:02.787 17:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:02.787 17:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:02.787 17:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:02.787 17:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.787 17:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.787 [2024-12-07 17:24:36.067306] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:02.787 [2024-12-07 17:24:36.067412] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:02.787 [2024-12-07 17:24:36.067445] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:02.787 [2024-12-07 17:24:36.067470] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:02.787 17:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.787 17:24:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:02.787 17:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:02.787 17:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:02.787 17:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:02.787 17:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:02.788 17:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:02.788 17:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.788 17:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.788 17:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.788 17:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.788 17:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.788 17:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:02.788 17:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.788 17:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.788 17:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.788 17:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.788 "name": "Existed_Raid", 00:08:02.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.788 "strip_size_kb": 0, 00:08:02.788 "state": "configuring", 00:08:02.788 
"raid_level": "raid1", 00:08:02.788 "superblock": false, 00:08:02.788 "num_base_bdevs": 2, 00:08:02.788 "num_base_bdevs_discovered": 0, 00:08:02.788 "num_base_bdevs_operational": 2, 00:08:02.788 "base_bdevs_list": [ 00:08:02.788 { 00:08:02.788 "name": "BaseBdev1", 00:08:02.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.788 "is_configured": false, 00:08:02.788 "data_offset": 0, 00:08:02.788 "data_size": 0 00:08:02.788 }, 00:08:02.788 { 00:08:02.788 "name": "BaseBdev2", 00:08:02.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.788 "is_configured": false, 00:08:02.788 "data_offset": 0, 00:08:02.788 "data_size": 0 00:08:02.788 } 00:08:02.788 ] 00:08:02.788 }' 00:08:02.788 17:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.788 17:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.358 17:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:03.358 17:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.358 17:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.358 [2024-12-07 17:24:36.506548] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:03.358 [2024-12-07 17:24:36.506645] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:03.358 17:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.358 17:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:03.358 17:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.358 17:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:03.358 [2024-12-07 17:24:36.518512] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:03.358 [2024-12-07 17:24:36.518591] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:03.358 [2024-12-07 17:24:36.518622] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:03.358 [2024-12-07 17:24:36.518648] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:03.358 17:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.358 17:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:03.358 17:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.358 17:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.358 [2024-12-07 17:24:36.568381] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:03.358 BaseBdev1 00:08:03.358 17:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.358 17:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:03.358 17:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:03.358 17:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:03.358 17:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:03.358 17:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:03.358 17:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:03.358 17:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:08:03.358 17:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.358 17:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.358 17:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.358 17:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:03.358 17:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.358 17:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.358 [ 00:08:03.358 { 00:08:03.358 "name": "BaseBdev1", 00:08:03.358 "aliases": [ 00:08:03.358 "cad1685a-c045-4f83-a156-3e8e1fd79037" 00:08:03.358 ], 00:08:03.358 "product_name": "Malloc disk", 00:08:03.358 "block_size": 512, 00:08:03.358 "num_blocks": 65536, 00:08:03.358 "uuid": "cad1685a-c045-4f83-a156-3e8e1fd79037", 00:08:03.358 "assigned_rate_limits": { 00:08:03.358 "rw_ios_per_sec": 0, 00:08:03.358 "rw_mbytes_per_sec": 0, 00:08:03.358 "r_mbytes_per_sec": 0, 00:08:03.358 "w_mbytes_per_sec": 0 00:08:03.358 }, 00:08:03.358 "claimed": true, 00:08:03.358 "claim_type": "exclusive_write", 00:08:03.358 "zoned": false, 00:08:03.358 "supported_io_types": { 00:08:03.358 "read": true, 00:08:03.358 "write": true, 00:08:03.358 "unmap": true, 00:08:03.358 "flush": true, 00:08:03.358 "reset": true, 00:08:03.358 "nvme_admin": false, 00:08:03.358 "nvme_io": false, 00:08:03.358 "nvme_io_md": false, 00:08:03.358 "write_zeroes": true, 00:08:03.358 "zcopy": true, 00:08:03.358 "get_zone_info": false, 00:08:03.358 "zone_management": false, 00:08:03.358 "zone_append": false, 00:08:03.358 "compare": false, 00:08:03.358 "compare_and_write": false, 00:08:03.358 "abort": true, 00:08:03.358 "seek_hole": false, 00:08:03.358 "seek_data": false, 00:08:03.358 "copy": true, 00:08:03.358 "nvme_iov_md": 
false 00:08:03.358 }, 00:08:03.358 "memory_domains": [ 00:08:03.358 { 00:08:03.358 "dma_device_id": "system", 00:08:03.358 "dma_device_type": 1 00:08:03.358 }, 00:08:03.358 { 00:08:03.358 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.358 "dma_device_type": 2 00:08:03.358 } 00:08:03.358 ], 00:08:03.358 "driver_specific": {} 00:08:03.358 } 00:08:03.358 ] 00:08:03.358 17:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.358 17:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:03.358 17:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:03.358 17:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:03.358 17:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:03.358 17:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:03.358 17:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:03.358 17:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:03.358 17:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.358 17:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.358 17:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:03.358 17:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.358 17:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.358 17:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:03.358 
17:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.358 17:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.358 17:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.358 17:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.358 "name": "Existed_Raid", 00:08:03.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.358 "strip_size_kb": 0, 00:08:03.358 "state": "configuring", 00:08:03.358 "raid_level": "raid1", 00:08:03.358 "superblock": false, 00:08:03.358 "num_base_bdevs": 2, 00:08:03.358 "num_base_bdevs_discovered": 1, 00:08:03.358 "num_base_bdevs_operational": 2, 00:08:03.358 "base_bdevs_list": [ 00:08:03.358 { 00:08:03.358 "name": "BaseBdev1", 00:08:03.358 "uuid": "cad1685a-c045-4f83-a156-3e8e1fd79037", 00:08:03.358 "is_configured": true, 00:08:03.358 "data_offset": 0, 00:08:03.358 "data_size": 65536 00:08:03.358 }, 00:08:03.358 { 00:08:03.358 "name": "BaseBdev2", 00:08:03.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.358 "is_configured": false, 00:08:03.358 "data_offset": 0, 00:08:03.358 "data_size": 0 00:08:03.358 } 00:08:03.358 ] 00:08:03.358 }' 00:08:03.358 17:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.358 17:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.929 17:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:03.929 17:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.929 17:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.929 [2024-12-07 17:24:37.019660] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:03.929 [2024-12-07 17:24:37.019715] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:03.929 17:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.929 17:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:03.929 17:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.929 17:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.929 [2024-12-07 17:24:37.031664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:03.929 [2024-12-07 17:24:37.033551] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:03.929 [2024-12-07 17:24:37.033596] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:03.929 17:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.929 17:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:03.929 17:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:03.929 17:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:03.929 17:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:03.929 17:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:03.929 17:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:03.929 17:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:03.929 17:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:08:03.930 17:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.930 17:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.930 17:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:03.930 17:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.930 17:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.930 17:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:03.930 17:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.930 17:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.930 17:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.930 17:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.930 "name": "Existed_Raid", 00:08:03.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.930 "strip_size_kb": 0, 00:08:03.930 "state": "configuring", 00:08:03.930 "raid_level": "raid1", 00:08:03.930 "superblock": false, 00:08:03.930 "num_base_bdevs": 2, 00:08:03.930 "num_base_bdevs_discovered": 1, 00:08:03.930 "num_base_bdevs_operational": 2, 00:08:03.930 "base_bdevs_list": [ 00:08:03.930 { 00:08:03.930 "name": "BaseBdev1", 00:08:03.930 "uuid": "cad1685a-c045-4f83-a156-3e8e1fd79037", 00:08:03.930 "is_configured": true, 00:08:03.930 "data_offset": 0, 00:08:03.930 "data_size": 65536 00:08:03.930 }, 00:08:03.930 { 00:08:03.930 "name": "BaseBdev2", 00:08:03.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.930 "is_configured": false, 00:08:03.930 "data_offset": 0, 00:08:03.930 "data_size": 0 00:08:03.930 } 00:08:03.930 ] 
00:08:03.930 }' 00:08:03.930 17:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.930 17:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.191 17:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:04.191 17:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.191 17:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.191 [2024-12-07 17:24:37.466178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:04.191 [2024-12-07 17:24:37.466320] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:04.191 [2024-12-07 17:24:37.466347] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:04.191 [2024-12-07 17:24:37.466634] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:04.191 [2024-12-07 17:24:37.466867] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:04.191 [2024-12-07 17:24:37.466915] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:04.191 [2024-12-07 17:24:37.467252] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:04.191 BaseBdev2 00:08:04.191 17:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.191 17:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:04.191 17:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:04.191 17:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:04.191 17:24:37 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@905 -- # local i 00:08:04.191 17:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:04.191 17:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:04.191 17:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:04.191 17:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.191 17:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.191 17:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.191 17:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:04.191 17:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.191 17:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.191 [ 00:08:04.191 { 00:08:04.191 "name": "BaseBdev2", 00:08:04.191 "aliases": [ 00:08:04.191 "7bd36bd6-601c-405b-a5a6-06a964269a1f" 00:08:04.191 ], 00:08:04.191 "product_name": "Malloc disk", 00:08:04.191 "block_size": 512, 00:08:04.191 "num_blocks": 65536, 00:08:04.191 "uuid": "7bd36bd6-601c-405b-a5a6-06a964269a1f", 00:08:04.191 "assigned_rate_limits": { 00:08:04.191 "rw_ios_per_sec": 0, 00:08:04.191 "rw_mbytes_per_sec": 0, 00:08:04.191 "r_mbytes_per_sec": 0, 00:08:04.191 "w_mbytes_per_sec": 0 00:08:04.191 }, 00:08:04.191 "claimed": true, 00:08:04.191 "claim_type": "exclusive_write", 00:08:04.191 "zoned": false, 00:08:04.191 "supported_io_types": { 00:08:04.191 "read": true, 00:08:04.191 "write": true, 00:08:04.191 "unmap": true, 00:08:04.191 "flush": true, 00:08:04.191 "reset": true, 00:08:04.191 "nvme_admin": false, 00:08:04.191 "nvme_io": false, 00:08:04.191 "nvme_io_md": false, 00:08:04.191 "write_zeroes": 
true, 00:08:04.191 "zcopy": true, 00:08:04.191 "get_zone_info": false, 00:08:04.191 "zone_management": false, 00:08:04.191 "zone_append": false, 00:08:04.191 "compare": false, 00:08:04.191 "compare_and_write": false, 00:08:04.191 "abort": true, 00:08:04.191 "seek_hole": false, 00:08:04.191 "seek_data": false, 00:08:04.191 "copy": true, 00:08:04.191 "nvme_iov_md": false 00:08:04.191 }, 00:08:04.191 "memory_domains": [ 00:08:04.191 { 00:08:04.191 "dma_device_id": "system", 00:08:04.191 "dma_device_type": 1 00:08:04.191 }, 00:08:04.191 { 00:08:04.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.191 "dma_device_type": 2 00:08:04.191 } 00:08:04.191 ], 00:08:04.191 "driver_specific": {} 00:08:04.191 } 00:08:04.191 ] 00:08:04.191 17:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.191 17:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:04.191 17:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:04.191 17:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:04.191 17:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:04.191 17:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:04.191 17:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:04.191 17:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:04.191 17:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:04.191 17:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:04.191 17:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.191 17:24:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.191 17:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.191 17:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.192 17:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.192 17:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.192 17:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.192 17:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:04.192 17:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.192 17:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.192 "name": "Existed_Raid", 00:08:04.192 "uuid": "cf1c9f50-412e-4b70-b14d-18760ceee8d1", 00:08:04.192 "strip_size_kb": 0, 00:08:04.192 "state": "online", 00:08:04.192 "raid_level": "raid1", 00:08:04.192 "superblock": false, 00:08:04.192 "num_base_bdevs": 2, 00:08:04.192 "num_base_bdevs_discovered": 2, 00:08:04.192 "num_base_bdevs_operational": 2, 00:08:04.192 "base_bdevs_list": [ 00:08:04.192 { 00:08:04.192 "name": "BaseBdev1", 00:08:04.192 "uuid": "cad1685a-c045-4f83-a156-3e8e1fd79037", 00:08:04.192 "is_configured": true, 00:08:04.192 "data_offset": 0, 00:08:04.192 "data_size": 65536 00:08:04.192 }, 00:08:04.192 { 00:08:04.192 "name": "BaseBdev2", 00:08:04.192 "uuid": "7bd36bd6-601c-405b-a5a6-06a964269a1f", 00:08:04.192 "is_configured": true, 00:08:04.192 "data_offset": 0, 00:08:04.192 "data_size": 65536 00:08:04.192 } 00:08:04.192 ] 00:08:04.192 }' 00:08:04.192 17:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.192 17:24:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.762 17:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:04.762 17:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:04.762 17:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:04.762 17:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:04.762 17:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:04.762 17:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:04.762 17:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:04.762 17:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:04.762 17:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.762 17:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.762 [2024-12-07 17:24:37.929719] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:04.762 17:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.762 17:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:04.762 "name": "Existed_Raid", 00:08:04.762 "aliases": [ 00:08:04.762 "cf1c9f50-412e-4b70-b14d-18760ceee8d1" 00:08:04.762 ], 00:08:04.762 "product_name": "Raid Volume", 00:08:04.762 "block_size": 512, 00:08:04.762 "num_blocks": 65536, 00:08:04.762 "uuid": "cf1c9f50-412e-4b70-b14d-18760ceee8d1", 00:08:04.762 "assigned_rate_limits": { 00:08:04.762 "rw_ios_per_sec": 0, 00:08:04.762 "rw_mbytes_per_sec": 0, 00:08:04.762 "r_mbytes_per_sec": 0, 00:08:04.762 
"w_mbytes_per_sec": 0 00:08:04.762 }, 00:08:04.762 "claimed": false, 00:08:04.762 "zoned": false, 00:08:04.762 "supported_io_types": { 00:08:04.762 "read": true, 00:08:04.762 "write": true, 00:08:04.762 "unmap": false, 00:08:04.762 "flush": false, 00:08:04.762 "reset": true, 00:08:04.762 "nvme_admin": false, 00:08:04.762 "nvme_io": false, 00:08:04.762 "nvme_io_md": false, 00:08:04.762 "write_zeroes": true, 00:08:04.762 "zcopy": false, 00:08:04.762 "get_zone_info": false, 00:08:04.762 "zone_management": false, 00:08:04.762 "zone_append": false, 00:08:04.762 "compare": false, 00:08:04.762 "compare_and_write": false, 00:08:04.762 "abort": false, 00:08:04.762 "seek_hole": false, 00:08:04.762 "seek_data": false, 00:08:04.762 "copy": false, 00:08:04.762 "nvme_iov_md": false 00:08:04.762 }, 00:08:04.762 "memory_domains": [ 00:08:04.762 { 00:08:04.762 "dma_device_id": "system", 00:08:04.762 "dma_device_type": 1 00:08:04.762 }, 00:08:04.762 { 00:08:04.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.762 "dma_device_type": 2 00:08:04.762 }, 00:08:04.762 { 00:08:04.762 "dma_device_id": "system", 00:08:04.762 "dma_device_type": 1 00:08:04.762 }, 00:08:04.762 { 00:08:04.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.762 "dma_device_type": 2 00:08:04.762 } 00:08:04.762 ], 00:08:04.762 "driver_specific": { 00:08:04.762 "raid": { 00:08:04.762 "uuid": "cf1c9f50-412e-4b70-b14d-18760ceee8d1", 00:08:04.762 "strip_size_kb": 0, 00:08:04.762 "state": "online", 00:08:04.762 "raid_level": "raid1", 00:08:04.762 "superblock": false, 00:08:04.762 "num_base_bdevs": 2, 00:08:04.762 "num_base_bdevs_discovered": 2, 00:08:04.762 "num_base_bdevs_operational": 2, 00:08:04.762 "base_bdevs_list": [ 00:08:04.762 { 00:08:04.762 "name": "BaseBdev1", 00:08:04.762 "uuid": "cad1685a-c045-4f83-a156-3e8e1fd79037", 00:08:04.762 "is_configured": true, 00:08:04.762 "data_offset": 0, 00:08:04.762 "data_size": 65536 00:08:04.762 }, 00:08:04.762 { 00:08:04.762 "name": "BaseBdev2", 00:08:04.762 "uuid": 
"7bd36bd6-601c-405b-a5a6-06a964269a1f", 00:08:04.762 "is_configured": true, 00:08:04.762 "data_offset": 0, 00:08:04.762 "data_size": 65536 00:08:04.762 } 00:08:04.762 ] 00:08:04.762 } 00:08:04.762 } 00:08:04.762 }' 00:08:04.763 17:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:04.763 17:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:04.763 BaseBdev2' 00:08:04.763 17:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:04.763 17:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:04.763 17:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:04.763 17:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:04.763 17:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:04.763 17:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.763 17:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.763 17:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.763 17:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:04.763 17:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:04.763 17:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:04.763 17:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:04.763 17:24:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:04.763 17:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.763 17:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.763 17:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.763 17:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:05.023 17:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:05.023 17:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:05.023 17:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.023 17:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.023 [2024-12-07 17:24:38.149133] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:05.023 17:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.023 17:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:05.023 17:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:05.023 17:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:05.023 17:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:05.023 17:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:05.023 17:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:05.023 17:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:08:05.023 17:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:05.023 17:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:05.023 17:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:05.023 17:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:05.023 17:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.023 17:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.023 17:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.023 17:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.023 17:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:05.023 17:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.023 17:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.023 17:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.023 17:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.023 17:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.023 "name": "Existed_Raid", 00:08:05.023 "uuid": "cf1c9f50-412e-4b70-b14d-18760ceee8d1", 00:08:05.023 "strip_size_kb": 0, 00:08:05.023 "state": "online", 00:08:05.023 "raid_level": "raid1", 00:08:05.023 "superblock": false, 00:08:05.023 "num_base_bdevs": 2, 00:08:05.023 "num_base_bdevs_discovered": 1, 00:08:05.023 "num_base_bdevs_operational": 1, 00:08:05.023 "base_bdevs_list": [ 00:08:05.023 { 
00:08:05.023 "name": null, 00:08:05.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.023 "is_configured": false, 00:08:05.023 "data_offset": 0, 00:08:05.023 "data_size": 65536 00:08:05.023 }, 00:08:05.023 { 00:08:05.023 "name": "BaseBdev2", 00:08:05.023 "uuid": "7bd36bd6-601c-405b-a5a6-06a964269a1f", 00:08:05.023 "is_configured": true, 00:08:05.023 "data_offset": 0, 00:08:05.023 "data_size": 65536 00:08:05.023 } 00:08:05.023 ] 00:08:05.023 }' 00:08:05.023 17:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.023 17:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.283 17:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:05.283 17:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:05.283 17:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:05.283 17:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.283 17:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.283 17:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.542 17:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.543 17:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:05.543 17:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:05.543 17:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:05.543 17:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.543 17:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:05.543 [2024-12-07 17:24:38.687221] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:05.543 [2024-12-07 17:24:38.687424] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:05.543 [2024-12-07 17:24:38.807551] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:05.543 [2024-12-07 17:24:38.807737] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:05.543 [2024-12-07 17:24:38.807763] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:05.543 17:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.543 17:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:05.543 17:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:05.543 17:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.543 17:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:05.543 17:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.543 17:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.543 17:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.543 17:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:05.543 17:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:05.543 17:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:05.543 17:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62732 00:08:05.543 17:24:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62732 ']' 00:08:05.543 17:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62732 00:08:05.543 17:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:05.543 17:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:05.543 17:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62732 00:08:05.543 killing process with pid 62732 00:08:05.543 17:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:05.543 17:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:05.543 17:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62732' 00:08:05.543 17:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62732 00:08:05.543 17:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62732 00:08:05.543 [2024-12-07 17:24:38.901849] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:05.543 [2024-12-07 17:24:38.921483] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:06.922 ************************************ 00:08:06.922 END TEST raid_state_function_test 00:08:06.922 ************************************ 00:08:06.922 17:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:06.922 00:08:06.922 real 0m4.926s 00:08:06.922 user 0m7.000s 00:08:06.922 sys 0m0.794s 00:08:06.922 17:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:06.922 17:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.922 17:24:40 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:08:06.922 17:24:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:06.922 17:24:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:06.922 17:24:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:06.922 ************************************ 00:08:06.922 START TEST raid_state_function_test_sb 00:08:06.922 ************************************ 00:08:06.922 17:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:08:06.922 17:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:06.922 17:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:06.922 17:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:06.922 17:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:06.922 17:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:06.922 17:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:06.922 17:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:06.922 17:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:06.922 17:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:06.922 17:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:06.922 17:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:06.923 17:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:06.923 17:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:06.923 17:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:06.923 17:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:06.923 17:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:06.923 17:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:06.923 17:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:06.923 17:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:06.923 17:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:06.923 17:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:06.923 17:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:06.923 17:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62985 00:08:06.923 17:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62985' 00:08:06.923 Process raid pid: 62985 00:08:06.923 17:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62985 00:08:06.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:06.923 17:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:06.923 17:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62985 ']' 00:08:06.923 17:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.923 17:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:06.923 17:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:06.923 17:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:06.923 17:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.923 [2024-12-07 17:24:40.212217] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:08:06.923 [2024-12-07 17:24:40.212420] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:07.182 [2024-12-07 17:24:40.369452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.182 [2024-12-07 17:24:40.479085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.441 [2024-12-07 17:24:40.681695] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:07.441 [2024-12-07 17:24:40.681742] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:07.700 17:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:07.700 17:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:07.700 17:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:07.700 17:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.700 17:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.700 [2024-12-07 17:24:41.039972] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:07.700 [2024-12-07 17:24:41.040028] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:07.700 [2024-12-07 17:24:41.040039] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:07.700 [2024-12-07 17:24:41.040066] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:07.700 17:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.700 
17:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:07.700 17:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:07.700 17:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:07.700 17:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:07.700 17:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:07.700 17:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:07.700 17:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.700 17:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.700 17:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.700 17:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.700 17:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.700 17:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:07.700 17:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.700 17:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.700 17:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.959 17:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.959 "name": "Existed_Raid", 00:08:07.959 "uuid": "34259828-be97-43cf-9a0e-77b0998a0c99", 00:08:07.959 "strip_size_kb": 0, 
00:08:07.959 "state": "configuring", 00:08:07.959 "raid_level": "raid1", 00:08:07.959 "superblock": true, 00:08:07.959 "num_base_bdevs": 2, 00:08:07.959 "num_base_bdevs_discovered": 0, 00:08:07.959 "num_base_bdevs_operational": 2, 00:08:07.959 "base_bdevs_list": [ 00:08:07.959 { 00:08:07.959 "name": "BaseBdev1", 00:08:07.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:07.959 "is_configured": false, 00:08:07.959 "data_offset": 0, 00:08:07.959 "data_size": 0 00:08:07.959 }, 00:08:07.959 { 00:08:07.959 "name": "BaseBdev2", 00:08:07.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:07.959 "is_configured": false, 00:08:07.959 "data_offset": 0, 00:08:07.959 "data_size": 0 00:08:07.959 } 00:08:07.959 ] 00:08:07.959 }' 00:08:07.959 17:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.959 17:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.218 17:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:08.218 17:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.218 17:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.218 [2024-12-07 17:24:41.435206] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:08.218 [2024-12-07 17:24:41.435296] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:08.218 17:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.218 17:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:08.218 17:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.218 17:24:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.218 [2024-12-07 17:24:41.443179] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:08.218 [2024-12-07 17:24:41.443221] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:08.218 [2024-12-07 17:24:41.443231] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:08.218 [2024-12-07 17:24:41.443242] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:08.218 17:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.218 17:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:08.218 17:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.218 17:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.218 BaseBdev1 00:08:08.218 [2024-12-07 17:24:41.487925] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:08.218 17:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.218 17:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:08.218 17:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:08.218 17:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:08.218 17:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:08.218 17:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:08.218 17:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:08:08.218 17:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:08.218 17:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.218 17:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.218 17:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.218 17:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:08.218 17:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.218 17:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.218 [ 00:08:08.218 { 00:08:08.218 "name": "BaseBdev1", 00:08:08.218 "aliases": [ 00:08:08.218 "1bb9bd3f-178a-4cd6-a731-86ecf9e4bb1a" 00:08:08.218 ], 00:08:08.218 "product_name": "Malloc disk", 00:08:08.218 "block_size": 512, 00:08:08.218 "num_blocks": 65536, 00:08:08.218 "uuid": "1bb9bd3f-178a-4cd6-a731-86ecf9e4bb1a", 00:08:08.218 "assigned_rate_limits": { 00:08:08.218 "rw_ios_per_sec": 0, 00:08:08.218 "rw_mbytes_per_sec": 0, 00:08:08.218 "r_mbytes_per_sec": 0, 00:08:08.218 "w_mbytes_per_sec": 0 00:08:08.218 }, 00:08:08.218 "claimed": true, 00:08:08.218 "claim_type": "exclusive_write", 00:08:08.218 "zoned": false, 00:08:08.218 "supported_io_types": { 00:08:08.218 "read": true, 00:08:08.218 "write": true, 00:08:08.218 "unmap": true, 00:08:08.218 "flush": true, 00:08:08.218 "reset": true, 00:08:08.218 "nvme_admin": false, 00:08:08.218 "nvme_io": false, 00:08:08.218 "nvme_io_md": false, 00:08:08.219 "write_zeroes": true, 00:08:08.219 "zcopy": true, 00:08:08.219 "get_zone_info": false, 00:08:08.219 "zone_management": false, 00:08:08.219 "zone_append": false, 00:08:08.219 "compare": false, 00:08:08.219 "compare_and_write": false, 00:08:08.219 
"abort": true, 00:08:08.219 "seek_hole": false, 00:08:08.219 "seek_data": false, 00:08:08.219 "copy": true, 00:08:08.219 "nvme_iov_md": false 00:08:08.219 }, 00:08:08.219 "memory_domains": [ 00:08:08.219 { 00:08:08.219 "dma_device_id": "system", 00:08:08.219 "dma_device_type": 1 00:08:08.219 }, 00:08:08.219 { 00:08:08.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.219 "dma_device_type": 2 00:08:08.219 } 00:08:08.219 ], 00:08:08.219 "driver_specific": {} 00:08:08.219 } 00:08:08.219 ] 00:08:08.219 17:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.219 17:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:08.219 17:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:08.219 17:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:08.219 17:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:08.219 17:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:08.219 17:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:08.219 17:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:08.219 17:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.219 17:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.219 17:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.219 17:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.219 17:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:08.219 17:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:08.219 17:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.219 17:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.219 17:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.219 17:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.219 "name": "Existed_Raid", 00:08:08.219 "uuid": "69ea94c4-7551-48b9-83d1-511e211534ce", 00:08:08.219 "strip_size_kb": 0, 00:08:08.219 "state": "configuring", 00:08:08.219 "raid_level": "raid1", 00:08:08.219 "superblock": true, 00:08:08.219 "num_base_bdevs": 2, 00:08:08.219 "num_base_bdevs_discovered": 1, 00:08:08.219 "num_base_bdevs_operational": 2, 00:08:08.219 "base_bdevs_list": [ 00:08:08.219 { 00:08:08.219 "name": "BaseBdev1", 00:08:08.219 "uuid": "1bb9bd3f-178a-4cd6-a731-86ecf9e4bb1a", 00:08:08.219 "is_configured": true, 00:08:08.219 "data_offset": 2048, 00:08:08.219 "data_size": 63488 00:08:08.219 }, 00:08:08.219 { 00:08:08.219 "name": "BaseBdev2", 00:08:08.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.219 "is_configured": false, 00:08:08.219 "data_offset": 0, 00:08:08.219 "data_size": 0 00:08:08.219 } 00:08:08.219 ] 00:08:08.219 }' 00:08:08.219 17:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.219 17:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.784 17:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:08.784 17:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.784 17:24:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:08.784 [2024-12-07 17:24:41.927255] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:08.784 [2024-12-07 17:24:41.927364] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:08.784 17:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.784 17:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:08.784 17:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.784 17:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.784 [2024-12-07 17:24:41.935244] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:08.784 [2024-12-07 17:24:41.937144] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:08.784 [2024-12-07 17:24:41.937235] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:08.784 17:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.784 17:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:08.784 17:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:08.784 17:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:08.784 17:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:08.784 17:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:08.784 17:24:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:08.784 17:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:08.784 17:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:08.784 17:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.784 17:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.784 17:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.784 17:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.784 17:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:08.784 17:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.784 17:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.784 17:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.784 17:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.784 17:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.784 "name": "Existed_Raid", 00:08:08.784 "uuid": "3aa481fc-2285-4f57-b50d-5ee72e6dd00a", 00:08:08.784 "strip_size_kb": 0, 00:08:08.784 "state": "configuring", 00:08:08.784 "raid_level": "raid1", 00:08:08.784 "superblock": true, 00:08:08.784 "num_base_bdevs": 2, 00:08:08.784 "num_base_bdevs_discovered": 1, 00:08:08.784 "num_base_bdevs_operational": 2, 00:08:08.784 "base_bdevs_list": [ 00:08:08.784 { 00:08:08.784 "name": "BaseBdev1", 00:08:08.784 "uuid": "1bb9bd3f-178a-4cd6-a731-86ecf9e4bb1a", 00:08:08.784 "is_configured": true, 00:08:08.784 "data_offset": 2048, 
00:08:08.784 "data_size": 63488 00:08:08.784 }, 00:08:08.784 { 00:08:08.784 "name": "BaseBdev2", 00:08:08.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.784 "is_configured": false, 00:08:08.784 "data_offset": 0, 00:08:08.784 "data_size": 0 00:08:08.784 } 00:08:08.784 ] 00:08:08.784 }' 00:08:08.784 17:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.784 17:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.041 17:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:09.041 17:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.041 17:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.041 BaseBdev2 00:08:09.041 [2024-12-07 17:24:42.391386] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:09.041 [2024-12-07 17:24:42.391638] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:09.041 [2024-12-07 17:24:42.391653] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:09.041 [2024-12-07 17:24:42.391896] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:09.041 [2024-12-07 17:24:42.392093] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:09.041 [2024-12-07 17:24:42.392111] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:09.041 [2024-12-07 17:24:42.392277] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:09.041 17:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.041 17:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:08:09.041 17:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:09.041 17:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:09.041 17:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:09.041 17:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:09.041 17:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:09.041 17:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:09.041 17:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.041 17:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.041 17:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.041 17:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:09.041 17:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.041 17:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.041 [ 00:08:09.041 { 00:08:09.041 "name": "BaseBdev2", 00:08:09.041 "aliases": [ 00:08:09.041 "4683192f-e9c2-4665-adf0-389ead80479e" 00:08:09.041 ], 00:08:09.041 "product_name": "Malloc disk", 00:08:09.041 "block_size": 512, 00:08:09.041 "num_blocks": 65536, 00:08:09.041 "uuid": "4683192f-e9c2-4665-adf0-389ead80479e", 00:08:09.041 "assigned_rate_limits": { 00:08:09.041 "rw_ios_per_sec": 0, 00:08:09.041 "rw_mbytes_per_sec": 0, 00:08:09.041 "r_mbytes_per_sec": 0, 00:08:09.041 "w_mbytes_per_sec": 0 00:08:09.041 }, 00:08:09.041 "claimed": true, 00:08:09.041 "claim_type": 
"exclusive_write", 00:08:09.041 "zoned": false, 00:08:09.041 "supported_io_types": { 00:08:09.041 "read": true, 00:08:09.041 "write": true, 00:08:09.041 "unmap": true, 00:08:09.041 "flush": true, 00:08:09.041 "reset": true, 00:08:09.041 "nvme_admin": false, 00:08:09.041 "nvme_io": false, 00:08:09.041 "nvme_io_md": false, 00:08:09.041 "write_zeroes": true, 00:08:09.041 "zcopy": true, 00:08:09.041 "get_zone_info": false, 00:08:09.041 "zone_management": false, 00:08:09.041 "zone_append": false, 00:08:09.041 "compare": false, 00:08:09.041 "compare_and_write": false, 00:08:09.041 "abort": true, 00:08:09.041 "seek_hole": false, 00:08:09.041 "seek_data": false, 00:08:09.041 "copy": true, 00:08:09.041 "nvme_iov_md": false 00:08:09.041 }, 00:08:09.041 "memory_domains": [ 00:08:09.041 { 00:08:09.041 "dma_device_id": "system", 00:08:09.041 "dma_device_type": 1 00:08:09.041 }, 00:08:09.041 { 00:08:09.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.041 "dma_device_type": 2 00:08:09.041 } 00:08:09.041 ], 00:08:09.041 "driver_specific": {} 00:08:09.041 } 00:08:09.041 ] 00:08:09.041 17:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.041 17:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:09.041 17:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:09.041 17:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:09.041 17:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:09.041 17:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.041 17:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:09.041 17:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:08:09.041 17:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:09.041 17:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:09.041 17:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.041 17:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.041 17:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.041 17:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.298 17:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.298 17:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.298 17:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.298 17:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.298 17:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.298 17:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.298 "name": "Existed_Raid", 00:08:09.298 "uuid": "3aa481fc-2285-4f57-b50d-5ee72e6dd00a", 00:08:09.298 "strip_size_kb": 0, 00:08:09.298 "state": "online", 00:08:09.298 "raid_level": "raid1", 00:08:09.298 "superblock": true, 00:08:09.298 "num_base_bdevs": 2, 00:08:09.298 "num_base_bdevs_discovered": 2, 00:08:09.298 "num_base_bdevs_operational": 2, 00:08:09.298 "base_bdevs_list": [ 00:08:09.298 { 00:08:09.298 "name": "BaseBdev1", 00:08:09.298 "uuid": "1bb9bd3f-178a-4cd6-a731-86ecf9e4bb1a", 00:08:09.298 "is_configured": true, 00:08:09.298 "data_offset": 2048, 00:08:09.298 "data_size": 63488 
00:08:09.298 }, 00:08:09.298 { 00:08:09.298 "name": "BaseBdev2", 00:08:09.298 "uuid": "4683192f-e9c2-4665-adf0-389ead80479e", 00:08:09.298 "is_configured": true, 00:08:09.298 "data_offset": 2048, 00:08:09.298 "data_size": 63488 00:08:09.298 } 00:08:09.298 ] 00:08:09.298 }' 00:08:09.298 17:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.298 17:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.555 17:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:09.555 17:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:09.555 17:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:09.555 17:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:09.555 17:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:09.555 17:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:09.555 17:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:09.555 17:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.555 17:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.555 17:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:09.555 [2024-12-07 17:24:42.910861] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:09.555 17:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.555 17:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:09.555 "name": 
"Existed_Raid", 00:08:09.555 "aliases": [ 00:08:09.555 "3aa481fc-2285-4f57-b50d-5ee72e6dd00a" 00:08:09.555 ], 00:08:09.555 "product_name": "Raid Volume", 00:08:09.555 "block_size": 512, 00:08:09.555 "num_blocks": 63488, 00:08:09.555 "uuid": "3aa481fc-2285-4f57-b50d-5ee72e6dd00a", 00:08:09.555 "assigned_rate_limits": { 00:08:09.555 "rw_ios_per_sec": 0, 00:08:09.555 "rw_mbytes_per_sec": 0, 00:08:09.555 "r_mbytes_per_sec": 0, 00:08:09.555 "w_mbytes_per_sec": 0 00:08:09.555 }, 00:08:09.555 "claimed": false, 00:08:09.555 "zoned": false, 00:08:09.555 "supported_io_types": { 00:08:09.555 "read": true, 00:08:09.555 "write": true, 00:08:09.555 "unmap": false, 00:08:09.555 "flush": false, 00:08:09.555 "reset": true, 00:08:09.555 "nvme_admin": false, 00:08:09.555 "nvme_io": false, 00:08:09.555 "nvme_io_md": false, 00:08:09.555 "write_zeroes": true, 00:08:09.555 "zcopy": false, 00:08:09.555 "get_zone_info": false, 00:08:09.555 "zone_management": false, 00:08:09.555 "zone_append": false, 00:08:09.555 "compare": false, 00:08:09.555 "compare_and_write": false, 00:08:09.555 "abort": false, 00:08:09.555 "seek_hole": false, 00:08:09.555 "seek_data": false, 00:08:09.555 "copy": false, 00:08:09.555 "nvme_iov_md": false 00:08:09.555 }, 00:08:09.555 "memory_domains": [ 00:08:09.555 { 00:08:09.555 "dma_device_id": "system", 00:08:09.555 "dma_device_type": 1 00:08:09.555 }, 00:08:09.555 { 00:08:09.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.555 "dma_device_type": 2 00:08:09.555 }, 00:08:09.555 { 00:08:09.555 "dma_device_id": "system", 00:08:09.555 "dma_device_type": 1 00:08:09.555 }, 00:08:09.555 { 00:08:09.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.555 "dma_device_type": 2 00:08:09.555 } 00:08:09.555 ], 00:08:09.555 "driver_specific": { 00:08:09.555 "raid": { 00:08:09.555 "uuid": "3aa481fc-2285-4f57-b50d-5ee72e6dd00a", 00:08:09.555 "strip_size_kb": 0, 00:08:09.555 "state": "online", 00:08:09.555 "raid_level": "raid1", 00:08:09.555 "superblock": true, 00:08:09.555 
"num_base_bdevs": 2, 00:08:09.555 "num_base_bdevs_discovered": 2, 00:08:09.555 "num_base_bdevs_operational": 2, 00:08:09.555 "base_bdevs_list": [ 00:08:09.555 { 00:08:09.555 "name": "BaseBdev1", 00:08:09.555 "uuid": "1bb9bd3f-178a-4cd6-a731-86ecf9e4bb1a", 00:08:09.555 "is_configured": true, 00:08:09.555 "data_offset": 2048, 00:08:09.555 "data_size": 63488 00:08:09.555 }, 00:08:09.555 { 00:08:09.555 "name": "BaseBdev2", 00:08:09.555 "uuid": "4683192f-e9c2-4665-adf0-389ead80479e", 00:08:09.555 "is_configured": true, 00:08:09.555 "data_offset": 2048, 00:08:09.555 "data_size": 63488 00:08:09.555 } 00:08:09.555 ] 00:08:09.555 } 00:08:09.555 } 00:08:09.555 }' 00:08:09.555 17:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:09.842 17:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:09.842 BaseBdev2' 00:08:09.842 17:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:09.842 17:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:09.842 17:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:09.842 17:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:09.842 17:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:09.842 17:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.842 17:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.842 17:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:09.842 17:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:09.842 17:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:09.842 17:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:09.842 17:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:09.842 17:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:09.842 17:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.842 17:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.842 17:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.842 17:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:09.843 17:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:09.843 17:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:09.843 17:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.843 17:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.843 [2024-12-07 17:24:43.130267] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:10.126 17:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.126 17:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:10.126 17:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:10.126 17:24:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:10.126 17:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:08:10.126 17:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:10.126 17:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:10.126 17:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.126 17:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:10.126 17:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:10.126 17:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:10.126 17:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:10.126 17:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.126 17:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.126 17:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.126 17:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.126 17:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.126 17:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.126 17:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.126 17:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.126 17:24:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.126 17:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.126 "name": "Existed_Raid", 00:08:10.126 "uuid": "3aa481fc-2285-4f57-b50d-5ee72e6dd00a", 00:08:10.126 "strip_size_kb": 0, 00:08:10.126 "state": "online", 00:08:10.126 "raid_level": "raid1", 00:08:10.126 "superblock": true, 00:08:10.126 "num_base_bdevs": 2, 00:08:10.126 "num_base_bdevs_discovered": 1, 00:08:10.126 "num_base_bdevs_operational": 1, 00:08:10.126 "base_bdevs_list": [ 00:08:10.126 { 00:08:10.126 "name": null, 00:08:10.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.126 "is_configured": false, 00:08:10.126 "data_offset": 0, 00:08:10.126 "data_size": 63488 00:08:10.126 }, 00:08:10.126 { 00:08:10.126 "name": "BaseBdev2", 00:08:10.126 "uuid": "4683192f-e9c2-4665-adf0-389ead80479e", 00:08:10.126 "is_configured": true, 00:08:10.126 "data_offset": 2048, 00:08:10.126 "data_size": 63488 00:08:10.126 } 00:08:10.126 ] 00:08:10.126 }' 00:08:10.126 17:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.126 17:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.402 17:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:10.402 17:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:10.402 17:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.402 17:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:10.402 17:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.402 17:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.402 17:24:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.402 17:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:10.403 17:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:10.403 17:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:10.403 17:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.403 17:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.403 [2024-12-07 17:24:43.760489] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:10.403 [2024-12-07 17:24:43.760660] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:10.662 [2024-12-07 17:24:43.857922] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:10.662 [2024-12-07 17:24:43.858058] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:10.662 [2024-12-07 17:24:43.858106] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:10.662 17:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.662 17:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:10.662 17:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:10.662 17:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.663 17:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.663 17:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:08:10.663 17:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.663 17:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.663 17:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:10.663 17:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:10.663 17:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:10.663 17:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62985 00:08:10.663 17:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62985 ']' 00:08:10.663 17:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62985 00:08:10.663 17:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:10.663 17:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:10.663 17:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62985 00:08:10.663 17:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:10.663 17:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:10.663 17:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62985' 00:08:10.663 killing process with pid 62985 00:08:10.663 17:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62985 00:08:10.663 [2024-12-07 17:24:43.955513] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:10.663 17:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62985 
00:08:10.663 [2024-12-07 17:24:43.972138] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:12.050 ************************************ 00:08:12.050 END TEST raid_state_function_test_sb 00:08:12.050 ************************************ 00:08:12.050 17:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:12.050 00:08:12.050 real 0m4.971s 00:08:12.050 user 0m7.214s 00:08:12.050 sys 0m0.759s 00:08:12.050 17:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:12.050 17:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.050 17:24:45 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:08:12.050 17:24:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:12.050 17:24:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:12.050 17:24:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:12.050 ************************************ 00:08:12.050 START TEST raid_superblock_test 00:08:12.050 ************************************ 00:08:12.050 17:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:08:12.050 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:08:12.050 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:12.050 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:12.050 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:12.050 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:12.050 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:12.050 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 
-- # base_bdevs_pt_uuid=() 00:08:12.050 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:12.050 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:12.050 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:12.050 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:12.050 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:12.050 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:12.050 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:08:12.050 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:08:12.050 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63231 00:08:12.050 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:12.050 17:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63231 00:08:12.050 17:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63231 ']' 00:08:12.050 17:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.050 17:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:12.050 17:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:12.050 17:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:12.050 17:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.050 [2024-12-07 17:24:45.250053] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:08:12.050 [2024-12-07 17:24:45.250272] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63231 ] 00:08:12.050 [2024-12-07 17:24:45.405868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.307 [2024-12-07 17:24:45.517397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.565 [2024-12-07 17:24:45.717478] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:12.565 [2024-12-07 17:24:45.717518] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:12.823 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:12.823 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:12.823 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:12.823 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:12.823 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:12.823 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:12.823 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:12.823 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:12.823 17:24:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:12.823 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:12.823 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:12.823 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.823 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.823 malloc1 00:08:12.823 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.823 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:12.823 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.823 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.823 [2024-12-07 17:24:46.130117] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:12.823 [2024-12-07 17:24:46.130177] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:12.823 [2024-12-07 17:24:46.130214] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:12.823 [2024-12-07 17:24:46.130226] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:12.823 [2024-12-07 17:24:46.132572] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:12.823 [2024-12-07 17:24:46.132613] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:12.823 pt1 00:08:12.823 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.823 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:12.823 17:24:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:12.823 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:12.823 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:12.823 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:12.823 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:12.823 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:12.823 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:12.823 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:12.823 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.823 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.823 malloc2 00:08:12.823 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.824 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:12.824 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.824 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.824 [2024-12-07 17:24:46.189457] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:12.824 [2024-12-07 17:24:46.189560] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:12.824 [2024-12-07 17:24:46.189605] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:12.824 
[2024-12-07 17:24:46.189635] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:12.824 [2024-12-07 17:24:46.191800] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:12.824 [2024-12-07 17:24:46.191869] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:12.824 pt2 00:08:12.824 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.824 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:12.824 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:12.824 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:12.824 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.824 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.824 [2024-12-07 17:24:46.201478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:12.824 [2024-12-07 17:24:46.203396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:12.824 [2024-12-07 17:24:46.203656] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:13.081 [2024-12-07 17:24:46.203715] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:13.081 [2024-12-07 17:24:46.204076] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:13.081 [2024-12-07 17:24:46.204331] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:13.081 [2024-12-07 17:24:46.204385] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:13.081 [2024-12-07 17:24:46.204576] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:13.081 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.081 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:13.081 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:13.081 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:13.081 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:13.081 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:13.081 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:13.081 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.081 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.081 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.081 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.081 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.081 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:13.081 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.081 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.081 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.081 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.081 "name": "raid_bdev1", 00:08:13.081 "uuid": 
"61b58e20-4a83-4b37-b62b-9482c5885d24", 00:08:13.081 "strip_size_kb": 0, 00:08:13.081 "state": "online", 00:08:13.081 "raid_level": "raid1", 00:08:13.081 "superblock": true, 00:08:13.081 "num_base_bdevs": 2, 00:08:13.081 "num_base_bdevs_discovered": 2, 00:08:13.081 "num_base_bdevs_operational": 2, 00:08:13.081 "base_bdevs_list": [ 00:08:13.081 { 00:08:13.081 "name": "pt1", 00:08:13.081 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:13.081 "is_configured": true, 00:08:13.081 "data_offset": 2048, 00:08:13.081 "data_size": 63488 00:08:13.081 }, 00:08:13.081 { 00:08:13.081 "name": "pt2", 00:08:13.081 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:13.081 "is_configured": true, 00:08:13.081 "data_offset": 2048, 00:08:13.081 "data_size": 63488 00:08:13.081 } 00:08:13.081 ] 00:08:13.081 }' 00:08:13.081 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.081 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.340 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:13.340 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:13.340 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:13.340 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:13.340 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:13.340 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:13.340 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:13.340 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:13.340 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.340 17:24:46 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.340 [2024-12-07 17:24:46.700918] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:13.340 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.599 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:13.599 "name": "raid_bdev1", 00:08:13.599 "aliases": [ 00:08:13.599 "61b58e20-4a83-4b37-b62b-9482c5885d24" 00:08:13.599 ], 00:08:13.599 "product_name": "Raid Volume", 00:08:13.599 "block_size": 512, 00:08:13.599 "num_blocks": 63488, 00:08:13.599 "uuid": "61b58e20-4a83-4b37-b62b-9482c5885d24", 00:08:13.599 "assigned_rate_limits": { 00:08:13.599 "rw_ios_per_sec": 0, 00:08:13.599 "rw_mbytes_per_sec": 0, 00:08:13.599 "r_mbytes_per_sec": 0, 00:08:13.599 "w_mbytes_per_sec": 0 00:08:13.599 }, 00:08:13.599 "claimed": false, 00:08:13.599 "zoned": false, 00:08:13.599 "supported_io_types": { 00:08:13.599 "read": true, 00:08:13.599 "write": true, 00:08:13.599 "unmap": false, 00:08:13.599 "flush": false, 00:08:13.599 "reset": true, 00:08:13.599 "nvme_admin": false, 00:08:13.599 "nvme_io": false, 00:08:13.599 "nvme_io_md": false, 00:08:13.599 "write_zeroes": true, 00:08:13.599 "zcopy": false, 00:08:13.599 "get_zone_info": false, 00:08:13.599 "zone_management": false, 00:08:13.599 "zone_append": false, 00:08:13.599 "compare": false, 00:08:13.599 "compare_and_write": false, 00:08:13.599 "abort": false, 00:08:13.599 "seek_hole": false, 00:08:13.599 "seek_data": false, 00:08:13.599 "copy": false, 00:08:13.599 "nvme_iov_md": false 00:08:13.599 }, 00:08:13.599 "memory_domains": [ 00:08:13.599 { 00:08:13.599 "dma_device_id": "system", 00:08:13.599 "dma_device_type": 1 00:08:13.599 }, 00:08:13.599 { 00:08:13.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.599 "dma_device_type": 2 00:08:13.599 }, 00:08:13.599 { 00:08:13.599 "dma_device_id": "system", 00:08:13.599 "dma_device_type": 
1 00:08:13.599 }, 00:08:13.599 { 00:08:13.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.599 "dma_device_type": 2 00:08:13.599 } 00:08:13.599 ], 00:08:13.599 "driver_specific": { 00:08:13.599 "raid": { 00:08:13.599 "uuid": "61b58e20-4a83-4b37-b62b-9482c5885d24", 00:08:13.599 "strip_size_kb": 0, 00:08:13.599 "state": "online", 00:08:13.599 "raid_level": "raid1", 00:08:13.599 "superblock": true, 00:08:13.599 "num_base_bdevs": 2, 00:08:13.599 "num_base_bdevs_discovered": 2, 00:08:13.599 "num_base_bdevs_operational": 2, 00:08:13.599 "base_bdevs_list": [ 00:08:13.599 { 00:08:13.599 "name": "pt1", 00:08:13.599 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:13.599 "is_configured": true, 00:08:13.599 "data_offset": 2048, 00:08:13.599 "data_size": 63488 00:08:13.599 }, 00:08:13.599 { 00:08:13.599 "name": "pt2", 00:08:13.599 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:13.599 "is_configured": true, 00:08:13.599 "data_offset": 2048, 00:08:13.599 "data_size": 63488 00:08:13.599 } 00:08:13.599 ] 00:08:13.599 } 00:08:13.599 } 00:08:13.599 }' 00:08:13.599 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:13.599 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:13.599 pt2' 00:08:13.599 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:13.599 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:13.599 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:13.599 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:13.599 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" 
")' 00:08:13.599 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.599 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.599 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.599 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:13.599 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:13.599 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:13.599 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:13.599 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.599 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:13.599 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.599 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.599 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:13.599 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:13.599 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:13.599 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:13.599 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.599 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.599 [2024-12-07 17:24:46.920541] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:13.599 17:24:46 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.599 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=61b58e20-4a83-4b37-b62b-9482c5885d24 00:08:13.599 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 61b58e20-4a83-4b37-b62b-9482c5885d24 ']' 00:08:13.599 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:13.599 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.599 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.599 [2024-12-07 17:24:46.968163] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:13.599 [2024-12-07 17:24:46.968189] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:13.599 [2024-12-07 17:24:46.968280] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:13.599 [2024-12-07 17:24:46.968341] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:13.599 [2024-12-07 17:24:46.968354] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:13.599 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.599 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.599 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.599 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.599 17:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:13.859 17:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.859 17:24:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:13.859 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:13.859 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:13.859 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:13.859 17:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.859 17:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.859 17:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.859 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:13.859 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:13.859 17:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.859 17:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.859 17:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.859 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:13.859 17:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.859 17:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.859 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:13.859 17:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.859 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:13.859 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd 
bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:13.859 17:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:13.859 17:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:13.859 17:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:13.859 17:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:13.859 17:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:13.859 17:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:13.859 17:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:13.859 17:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.859 17:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.859 [2024-12-07 17:24:47.107988] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:13.859 [2024-12-07 17:24:47.109993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:13.859 [2024-12-07 17:24:47.110121] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:13.859 [2024-12-07 17:24:47.110184] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:13.859 [2024-12-07 17:24:47.110201] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:13.859 [2024-12-07 17:24:47.110213] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 
name raid_bdev1, state configuring 00:08:13.859 request: 00:08:13.859 { 00:08:13.859 "name": "raid_bdev1", 00:08:13.859 "raid_level": "raid1", 00:08:13.859 "base_bdevs": [ 00:08:13.859 "malloc1", 00:08:13.859 "malloc2" 00:08:13.859 ], 00:08:13.859 "superblock": false, 00:08:13.859 "method": "bdev_raid_create", 00:08:13.859 "req_id": 1 00:08:13.859 } 00:08:13.859 Got JSON-RPC error response 00:08:13.859 response: 00:08:13.859 { 00:08:13.859 "code": -17, 00:08:13.859 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:13.859 } 00:08:13.859 17:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:13.859 17:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:13.859 17:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:13.859 17:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:13.859 17:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:13.859 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.859 17:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.859 17:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.859 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:13.859 17:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.859 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:13.859 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:13.859 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:13.859 17:24:47 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.859 17:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.859 [2024-12-07 17:24:47.171843] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:13.859 [2024-12-07 17:24:47.171912] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:13.859 [2024-12-07 17:24:47.171942] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:13.859 [2024-12-07 17:24:47.171956] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:13.859 [2024-12-07 17:24:47.174234] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:13.859 [2024-12-07 17:24:47.174274] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:13.859 [2024-12-07 17:24:47.174361] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:13.859 [2024-12-07 17:24:47.174425] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:13.859 pt1 00:08:13.859 17:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.859 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:13.859 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:13.859 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:13.859 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:13.859 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:13.859 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:13.859 17:24:47 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.859 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.859 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.859 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.859 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.859 17:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.859 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:13.859 17:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.859 17:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.859 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.859 "name": "raid_bdev1", 00:08:13.859 "uuid": "61b58e20-4a83-4b37-b62b-9482c5885d24", 00:08:13.860 "strip_size_kb": 0, 00:08:13.860 "state": "configuring", 00:08:13.860 "raid_level": "raid1", 00:08:13.860 "superblock": true, 00:08:13.860 "num_base_bdevs": 2, 00:08:13.860 "num_base_bdevs_discovered": 1, 00:08:13.860 "num_base_bdevs_operational": 2, 00:08:13.860 "base_bdevs_list": [ 00:08:13.860 { 00:08:13.860 "name": "pt1", 00:08:13.860 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:13.860 "is_configured": true, 00:08:13.860 "data_offset": 2048, 00:08:13.860 "data_size": 63488 00:08:13.860 }, 00:08:13.860 { 00:08:13.860 "name": null, 00:08:13.860 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:13.860 "is_configured": false, 00:08:13.860 "data_offset": 2048, 00:08:13.860 "data_size": 63488 00:08:13.860 } 00:08:13.860 ] 00:08:13.860 }' 00:08:13.860 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.860 17:24:47 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.427 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:14.427 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:14.427 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:14.427 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:14.427 17:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.427 17:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.427 [2024-12-07 17:24:47.571202] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:14.427 [2024-12-07 17:24:47.571342] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:14.427 [2024-12-07 17:24:47.571389] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:14.427 [2024-12-07 17:24:47.571428] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:14.427 [2024-12-07 17:24:47.571972] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:14.427 [2024-12-07 17:24:47.572036] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:14.427 [2024-12-07 17:24:47.572150] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:14.427 [2024-12-07 17:24:47.572212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:14.427 [2024-12-07 17:24:47.572375] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:14.427 [2024-12-07 17:24:47.572419] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:14.427 [2024-12-07 
17:24:47.572687] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:14.428 [2024-12-07 17:24:47.572875] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:14.428 [2024-12-07 17:24:47.572913] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:14.428 [2024-12-07 17:24:47.573110] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:14.428 pt2 00:08:14.428 17:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.428 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:14.428 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:14.428 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:14.428 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:14.428 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:14.428 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:14.428 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:14.428 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:14.428 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.428 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.428 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.428 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.428 17:24:47 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.428 17:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.428 17:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.428 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:14.428 17:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.428 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.428 "name": "raid_bdev1", 00:08:14.428 "uuid": "61b58e20-4a83-4b37-b62b-9482c5885d24", 00:08:14.428 "strip_size_kb": 0, 00:08:14.428 "state": "online", 00:08:14.428 "raid_level": "raid1", 00:08:14.428 "superblock": true, 00:08:14.428 "num_base_bdevs": 2, 00:08:14.428 "num_base_bdevs_discovered": 2, 00:08:14.428 "num_base_bdevs_operational": 2, 00:08:14.428 "base_bdevs_list": [ 00:08:14.428 { 00:08:14.428 "name": "pt1", 00:08:14.428 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:14.428 "is_configured": true, 00:08:14.428 "data_offset": 2048, 00:08:14.428 "data_size": 63488 00:08:14.428 }, 00:08:14.428 { 00:08:14.428 "name": "pt2", 00:08:14.428 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:14.428 "is_configured": true, 00:08:14.428 "data_offset": 2048, 00:08:14.428 "data_size": 63488 00:08:14.428 } 00:08:14.428 ] 00:08:14.428 }' 00:08:14.428 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.428 17:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.687 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:14.687 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:14.687 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:08:14.687 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:14.687 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:14.687 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:14.687 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:14.687 17:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.687 17:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.687 17:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:14.687 [2024-12-07 17:24:48.002688] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:14.687 17:24:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.687 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:14.687 "name": "raid_bdev1", 00:08:14.687 "aliases": [ 00:08:14.687 "61b58e20-4a83-4b37-b62b-9482c5885d24" 00:08:14.687 ], 00:08:14.687 "product_name": "Raid Volume", 00:08:14.687 "block_size": 512, 00:08:14.687 "num_blocks": 63488, 00:08:14.687 "uuid": "61b58e20-4a83-4b37-b62b-9482c5885d24", 00:08:14.687 "assigned_rate_limits": { 00:08:14.687 "rw_ios_per_sec": 0, 00:08:14.687 "rw_mbytes_per_sec": 0, 00:08:14.687 "r_mbytes_per_sec": 0, 00:08:14.687 "w_mbytes_per_sec": 0 00:08:14.687 }, 00:08:14.687 "claimed": false, 00:08:14.687 "zoned": false, 00:08:14.687 "supported_io_types": { 00:08:14.687 "read": true, 00:08:14.687 "write": true, 00:08:14.687 "unmap": false, 00:08:14.687 "flush": false, 00:08:14.687 "reset": true, 00:08:14.687 "nvme_admin": false, 00:08:14.687 "nvme_io": false, 00:08:14.687 "nvme_io_md": false, 00:08:14.687 "write_zeroes": true, 00:08:14.687 "zcopy": false, 00:08:14.687 "get_zone_info": false, 
00:08:14.687 "zone_management": false, 00:08:14.687 "zone_append": false, 00:08:14.687 "compare": false, 00:08:14.687 "compare_and_write": false, 00:08:14.687 "abort": false, 00:08:14.687 "seek_hole": false, 00:08:14.687 "seek_data": false, 00:08:14.687 "copy": false, 00:08:14.687 "nvme_iov_md": false 00:08:14.687 }, 00:08:14.687 "memory_domains": [ 00:08:14.687 { 00:08:14.687 "dma_device_id": "system", 00:08:14.687 "dma_device_type": 1 00:08:14.687 }, 00:08:14.687 { 00:08:14.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:14.687 "dma_device_type": 2 00:08:14.687 }, 00:08:14.687 { 00:08:14.687 "dma_device_id": "system", 00:08:14.687 "dma_device_type": 1 00:08:14.687 }, 00:08:14.687 { 00:08:14.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:14.687 "dma_device_type": 2 00:08:14.687 } 00:08:14.687 ], 00:08:14.687 "driver_specific": { 00:08:14.687 "raid": { 00:08:14.687 "uuid": "61b58e20-4a83-4b37-b62b-9482c5885d24", 00:08:14.687 "strip_size_kb": 0, 00:08:14.687 "state": "online", 00:08:14.687 "raid_level": "raid1", 00:08:14.687 "superblock": true, 00:08:14.687 "num_base_bdevs": 2, 00:08:14.687 "num_base_bdevs_discovered": 2, 00:08:14.687 "num_base_bdevs_operational": 2, 00:08:14.687 "base_bdevs_list": [ 00:08:14.687 { 00:08:14.687 "name": "pt1", 00:08:14.687 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:14.687 "is_configured": true, 00:08:14.687 "data_offset": 2048, 00:08:14.687 "data_size": 63488 00:08:14.687 }, 00:08:14.687 { 00:08:14.687 "name": "pt2", 00:08:14.687 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:14.687 "is_configured": true, 00:08:14.688 "data_offset": 2048, 00:08:14.688 "data_size": 63488 00:08:14.688 } 00:08:14.688 ] 00:08:14.688 } 00:08:14.688 } 00:08:14.688 }' 00:08:14.688 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:14.947 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # 
base_bdev_names='pt1 00:08:14.947 pt2' 00:08:14.947 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:14.947 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:14.947 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:14.947 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:14.947 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:14.947 17:24:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.947 17:24:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.947 17:24:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.947 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:14.947 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:14.947 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:14.947 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:14.947 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:14.947 17:24:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.947 17:24:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.947 17:24:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.947 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:08:14.947 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:14.947 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:14.947 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:14.947 17:24:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.947 17:24:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.947 [2024-12-07 17:24:48.234334] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:14.947 17:24:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.947 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 61b58e20-4a83-4b37-b62b-9482c5885d24 '!=' 61b58e20-4a83-4b37-b62b-9482c5885d24 ']' 00:08:14.947 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:08:14.947 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:14.947 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:14.947 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:08:14.947 17:24:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.947 17:24:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.947 [2024-12-07 17:24:48.286023] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:08:14.947 17:24:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.947 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:14.947 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:08:14.947 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:14.947 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:14.947 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:14.947 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:14.947 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.947 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.947 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.947 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.947 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:14.947 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.947 17:24:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.947 17:24:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.947 17:24:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.206 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.206 "name": "raid_bdev1", 00:08:15.206 "uuid": "61b58e20-4a83-4b37-b62b-9482c5885d24", 00:08:15.206 "strip_size_kb": 0, 00:08:15.206 "state": "online", 00:08:15.206 "raid_level": "raid1", 00:08:15.206 "superblock": true, 00:08:15.206 "num_base_bdevs": 2, 00:08:15.206 "num_base_bdevs_discovered": 1, 00:08:15.206 "num_base_bdevs_operational": 1, 00:08:15.206 "base_bdevs_list": [ 00:08:15.206 { 00:08:15.206 "name": null, 00:08:15.206 "uuid": "00000000-0000-0000-0000-000000000000", 
00:08:15.206 "is_configured": false, 00:08:15.206 "data_offset": 0, 00:08:15.206 "data_size": 63488 00:08:15.206 }, 00:08:15.206 { 00:08:15.206 "name": "pt2", 00:08:15.206 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:15.206 "is_configured": true, 00:08:15.206 "data_offset": 2048, 00:08:15.206 "data_size": 63488 00:08:15.206 } 00:08:15.206 ] 00:08:15.206 }' 00:08:15.206 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.206 17:24:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.466 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:15.466 17:24:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.466 17:24:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.466 [2024-12-07 17:24:48.729284] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:15.466 [2024-12-07 17:24:48.729315] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:15.466 [2024-12-07 17:24:48.729400] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:15.466 [2024-12-07 17:24:48.729450] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:15.466 [2024-12-07 17:24:48.729462] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:15.466 17:24:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.466 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.466 17:24:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.466 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:08:15.466 
17:24:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.466 17:24:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.466 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:08:15.466 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:08:15.466 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:08:15.466 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:15.466 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:08:15.466 17:24:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.466 17:24:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.466 17:24:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.466 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:08:15.466 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:15.466 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:08:15.466 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:08:15.466 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:08:15.466 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:15.466 17:24:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.466 17:24:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.466 [2024-12-07 17:24:48.805111] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 
00:08:15.466 [2024-12-07 17:24:48.805222] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:15.466 [2024-12-07 17:24:48.805257] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:15.466 [2024-12-07 17:24:48.805313] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:15.466 [2024-12-07 17:24:48.807673] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:15.466 [2024-12-07 17:24:48.807749] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:15.466 [2024-12-07 17:24:48.807881] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:15.466 [2024-12-07 17:24:48.807983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:15.466 [2024-12-07 17:24:48.808134] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:15.466 [2024-12-07 17:24:48.808178] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:15.466 [2024-12-07 17:24:48.808459] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:15.466 [2024-12-07 17:24:48.808651] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:15.466 [2024-12-07 17:24:48.808692] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:15.466 [2024-12-07 17:24:48.808874] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:15.466 pt2 00:08:15.466 17:24:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.466 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:15.466 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:08:15.466 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:15.466 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:15.466 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:15.466 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:15.466 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.466 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.466 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.466 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.466 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.466 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:15.466 17:24:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.466 17:24:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.466 17:24:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.725 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.725 "name": "raid_bdev1", 00:08:15.725 "uuid": "61b58e20-4a83-4b37-b62b-9482c5885d24", 00:08:15.725 "strip_size_kb": 0, 00:08:15.725 "state": "online", 00:08:15.725 "raid_level": "raid1", 00:08:15.725 "superblock": true, 00:08:15.725 "num_base_bdevs": 2, 00:08:15.725 "num_base_bdevs_discovered": 1, 00:08:15.725 "num_base_bdevs_operational": 1, 00:08:15.725 "base_bdevs_list": [ 00:08:15.725 { 00:08:15.725 "name": null, 00:08:15.725 "uuid": "00000000-0000-0000-0000-000000000000", 
00:08:15.725 "is_configured": false, 00:08:15.725 "data_offset": 2048, 00:08:15.725 "data_size": 63488 00:08:15.725 }, 00:08:15.725 { 00:08:15.725 "name": "pt2", 00:08:15.725 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:15.725 "is_configured": true, 00:08:15.725 "data_offset": 2048, 00:08:15.725 "data_size": 63488 00:08:15.725 } 00:08:15.725 ] 00:08:15.725 }' 00:08:15.725 17:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.725 17:24:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.985 17:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:15.985 17:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.985 17:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.985 [2024-12-07 17:24:49.244356] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:15.985 [2024-12-07 17:24:49.244387] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:15.985 [2024-12-07 17:24:49.244476] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:15.985 [2024-12-07 17:24:49.244535] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:15.985 [2024-12-07 17:24:49.244546] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:15.985 17:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.985 17:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.985 17:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.985 17:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:08:15.985 
17:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.985 17:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.985 17:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:08:15.985 17:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:08:15.985 17:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:08:15.985 17:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:15.985 17:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.985 17:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.985 [2024-12-07 17:24:49.296264] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:15.985 [2024-12-07 17:24:49.296327] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:15.985 [2024-12-07 17:24:49.296347] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:08:15.985 [2024-12-07 17:24:49.296356] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:15.985 [2024-12-07 17:24:49.298586] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:15.985 [2024-12-07 17:24:49.298624] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:15.985 [2024-12-07 17:24:49.298712] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:15.985 [2024-12-07 17:24:49.298758] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:15.985 [2024-12-07 17:24:49.298913] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 
00:08:15.985 [2024-12-07 17:24:49.298925] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:15.985 [2024-12-07 17:24:49.298954] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:08:15.985 [2024-12-07 17:24:49.299021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:15.985 [2024-12-07 17:24:49.299117] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:08:15.985 [2024-12-07 17:24:49.299126] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:15.985 pt1 00:08:15.985 [2024-12-07 17:24:49.299397] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:08:15.985 [2024-12-07 17:24:49.299581] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:08:15.985 [2024-12-07 17:24:49.299595] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:08:15.985 [2024-12-07 17:24:49.299776] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:15.985 17:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.985 17:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:08:15.985 17:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:15.985 17:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:15.985 17:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:15.985 17:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:15.985 17:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:15.985 17:24:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:15.985 17:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.985 17:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.985 17:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.985 17:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.985 17:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.985 17:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:15.985 17:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.985 17:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.985 17:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.985 17:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.985 "name": "raid_bdev1", 00:08:15.985 "uuid": "61b58e20-4a83-4b37-b62b-9482c5885d24", 00:08:15.985 "strip_size_kb": 0, 00:08:15.985 "state": "online", 00:08:15.985 "raid_level": "raid1", 00:08:15.985 "superblock": true, 00:08:15.985 "num_base_bdevs": 2, 00:08:15.985 "num_base_bdevs_discovered": 1, 00:08:15.985 "num_base_bdevs_operational": 1, 00:08:15.985 "base_bdevs_list": [ 00:08:15.985 { 00:08:15.985 "name": null, 00:08:15.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.985 "is_configured": false, 00:08:15.985 "data_offset": 2048, 00:08:15.985 "data_size": 63488 00:08:15.985 }, 00:08:15.985 { 00:08:15.985 "name": "pt2", 00:08:15.985 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:15.985 "is_configured": true, 00:08:15.985 "data_offset": 2048, 00:08:15.985 "data_size": 63488 00:08:15.985 } 
00:08:15.985 ] 00:08:15.985 }' 00:08:15.985 17:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.985 17:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.554 17:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:16.554 17:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:16.554 17:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.554 17:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.554 17:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.554 17:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:08:16.554 17:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:16.554 17:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.554 17:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:08:16.554 17:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.554 [2024-12-07 17:24:49.823602] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:16.554 17:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.554 17:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 61b58e20-4a83-4b37-b62b-9482c5885d24 '!=' 61b58e20-4a83-4b37-b62b-9482c5885d24 ']' 00:08:16.554 17:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63231 00:08:16.554 17:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63231 ']' 00:08:16.554 17:24:49 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@958 -- # kill -0 63231 00:08:16.554 17:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:16.554 17:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:16.554 17:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63231 00:08:16.554 17:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:16.554 17:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:16.554 17:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63231' 00:08:16.554 killing process with pid 63231 00:08:16.554 17:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63231 00:08:16.554 [2024-12-07 17:24:49.900323] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:16.555 [2024-12-07 17:24:49.900428] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:16.555 [2024-12-07 17:24:49.900482] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:16.555 [2024-12-07 17:24:49.900498] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:08:16.555 17:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63231 00:08:16.814 [2024-12-07 17:24:50.108004] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:18.220 ************************************ 00:08:18.220 END TEST raid_superblock_test 00:08:18.220 ************************************ 00:08:18.220 17:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:18.220 00:08:18.220 real 0m6.084s 00:08:18.220 user 0m9.227s 00:08:18.220 sys 0m1.016s 00:08:18.220 17:24:51 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:18.220 17:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.220 17:24:51 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:08:18.220 17:24:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:18.220 17:24:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:18.220 17:24:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:18.220 ************************************ 00:08:18.220 START TEST raid_read_error_test 00:08:18.220 ************************************ 00:08:18.220 17:24:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:08:18.220 17:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:18.220 17:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:18.220 17:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:18.220 17:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:18.220 17:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:18.220 17:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:18.220 17:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:18.220 17:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:18.220 17:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:18.220 17:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:18.220 17:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:18.220 17:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:18.220 17:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:18.220 17:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:18.220 17:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:18.220 17:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:18.220 17:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:18.220 17:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:18.220 17:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:18.220 17:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:18.220 17:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:18.220 17:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.jtB0X3cXG8 00:08:18.220 17:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63556 00:08:18.220 17:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:18.220 17:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63556 00:08:18.220 17:24:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63556 ']' 00:08:18.220 17:24:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:18.220 17:24:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:18.220 17:24:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:08:18.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:18.220 17:24:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:18.220 17:24:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.220 [2024-12-07 17:24:51.420401] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:08:18.220 [2024-12-07 17:24:51.420587] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63556 ] 00:08:18.220 [2024-12-07 17:24:51.596095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.480 [2024-12-07 17:24:51.709261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.739 [2024-12-07 17:24:51.910788] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:18.739 [2024-12-07 17:24:51.910938] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:18.999 17:24:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:18.999 17:24:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:18.999 17:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:18.999 17:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:18.999 17:24:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.999 17:24:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.999 BaseBdev1_malloc 00:08:18.999 17:24:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:18.999 17:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:18.999 17:24:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.999 17:24:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.999 true 00:08:18.999 17:24:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.999 17:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:18.999 17:24:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.999 17:24:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.999 [2024-12-07 17:24:52.335348] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:18.999 [2024-12-07 17:24:52.335410] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:18.999 [2024-12-07 17:24:52.335432] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:18.999 [2024-12-07 17:24:52.335444] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:18.999 [2024-12-07 17:24:52.337796] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:18.999 [2024-12-07 17:24:52.337837] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:18.999 BaseBdev1 00:08:18.999 17:24:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.999 17:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:18.999 17:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:18.999 17:24:52 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.999 17:24:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.259 BaseBdev2_malloc 00:08:19.259 17:24:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.259 17:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:19.259 17:24:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.259 17:24:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.259 true 00:08:19.259 17:24:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.259 17:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:19.259 17:24:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.259 17:24:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.259 [2024-12-07 17:24:52.401638] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:19.259 [2024-12-07 17:24:52.401693] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:19.259 [2024-12-07 17:24:52.401710] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:19.259 [2024-12-07 17:24:52.401721] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:19.259 [2024-12-07 17:24:52.403830] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:19.259 [2024-12-07 17:24:52.403869] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:19.259 BaseBdev2 00:08:19.259 17:24:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.259 17:24:52 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:19.259 17:24:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.259 17:24:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.259 [2024-12-07 17:24:52.413675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:19.259 [2024-12-07 17:24:52.415634] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:19.259 [2024-12-07 17:24:52.415826] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:19.259 [2024-12-07 17:24:52.415843] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:19.259 [2024-12-07 17:24:52.416122] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:19.259 [2024-12-07 17:24:52.416302] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:19.259 [2024-12-07 17:24:52.416314] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:19.259 [2024-12-07 17:24:52.416479] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:19.259 17:24:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.259 17:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:19.259 17:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:19.259 17:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:19.259 17:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:19.259 17:24:52 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:19.259 17:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:19.259 17:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.259 17:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.259 17:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.259 17:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.259 17:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:19.259 17:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.259 17:24:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.259 17:24:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.259 17:24:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.259 17:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.259 "name": "raid_bdev1", 00:08:19.259 "uuid": "ba487e45-f39f-43ab-86aa-83547270614f", 00:08:19.259 "strip_size_kb": 0, 00:08:19.259 "state": "online", 00:08:19.259 "raid_level": "raid1", 00:08:19.259 "superblock": true, 00:08:19.259 "num_base_bdevs": 2, 00:08:19.259 "num_base_bdevs_discovered": 2, 00:08:19.259 "num_base_bdevs_operational": 2, 00:08:19.259 "base_bdevs_list": [ 00:08:19.259 { 00:08:19.259 "name": "BaseBdev1", 00:08:19.259 "uuid": "a84d4ec0-ee3a-59cf-a685-e0b2fcbfe962", 00:08:19.259 "is_configured": true, 00:08:19.259 "data_offset": 2048, 00:08:19.259 "data_size": 63488 00:08:19.259 }, 00:08:19.259 { 00:08:19.259 "name": "BaseBdev2", 00:08:19.259 "uuid": "0dec4fd7-7a9e-58f2-b20f-59127b54a0c7", 00:08:19.259 "is_configured": true, 
00:08:19.259 "data_offset": 2048, 00:08:19.259 "data_size": 63488 00:08:19.259 } 00:08:19.259 ] 00:08:19.259 }' 00:08:19.259 17:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.259 17:24:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.519 17:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:19.519 17:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:19.779 [2024-12-07 17:24:52.969963] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:20.720 17:24:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:20.720 17:24:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.720 17:24:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.720 17:24:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.720 17:24:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:20.720 17:24:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:20.721 17:24:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:08:20.721 17:24:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:20.721 17:24:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:20.721 17:24:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:20.721 17:24:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:20.721 17:24:53 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:20.721 17:24:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:20.721 17:24:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:20.721 17:24:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.721 17:24:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.721 17:24:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.721 17:24:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.721 17:24:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.721 17:24:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:20.721 17:24:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.721 17:24:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.721 17:24:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.721 17:24:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.721 "name": "raid_bdev1", 00:08:20.721 "uuid": "ba487e45-f39f-43ab-86aa-83547270614f", 00:08:20.721 "strip_size_kb": 0, 00:08:20.721 "state": "online", 00:08:20.721 "raid_level": "raid1", 00:08:20.721 "superblock": true, 00:08:20.721 "num_base_bdevs": 2, 00:08:20.721 "num_base_bdevs_discovered": 2, 00:08:20.721 "num_base_bdevs_operational": 2, 00:08:20.721 "base_bdevs_list": [ 00:08:20.721 { 00:08:20.721 "name": "BaseBdev1", 00:08:20.721 "uuid": "a84d4ec0-ee3a-59cf-a685-e0b2fcbfe962", 00:08:20.721 "is_configured": true, 00:08:20.721 "data_offset": 2048, 00:08:20.721 "data_size": 63488 00:08:20.721 }, 00:08:20.721 { 00:08:20.721 "name": 
"BaseBdev2", 00:08:20.721 "uuid": "0dec4fd7-7a9e-58f2-b20f-59127b54a0c7", 00:08:20.721 "is_configured": true, 00:08:20.721 "data_offset": 2048, 00:08:20.721 "data_size": 63488 00:08:20.721 } 00:08:20.721 ] 00:08:20.721 }' 00:08:20.721 17:24:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.721 17:24:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.290 17:24:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:21.290 17:24:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.290 17:24:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.290 [2024-12-07 17:24:54.377958] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:21.290 [2024-12-07 17:24:54.378004] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:21.290 [2024-12-07 17:24:54.380928] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:21.290 [2024-12-07 17:24:54.381063] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:21.290 [2024-12-07 17:24:54.381164] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:21.290 [2024-12-07 17:24:54.381179] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:21.290 { 00:08:21.290 "results": [ 00:08:21.290 { 00:08:21.290 "job": "raid_bdev1", 00:08:21.290 "core_mask": "0x1", 00:08:21.290 "workload": "randrw", 00:08:21.290 "percentage": 50, 00:08:21.290 "status": "finished", 00:08:21.290 "queue_depth": 1, 00:08:21.290 "io_size": 131072, 00:08:21.290 "runtime": 1.408929, 00:08:21.290 "iops": 17573.632170251305, 00:08:21.290 "mibps": 2196.704021281413, 00:08:21.290 "io_failed": 0, 00:08:21.290 "io_timeout": 0, 00:08:21.290 
"avg_latency_us": 54.21515982250566, 00:08:21.290 "min_latency_us": 23.923144104803495, 00:08:21.290 "max_latency_us": 1445.2262008733624 00:08:21.290 } 00:08:21.290 ], 00:08:21.290 "core_count": 1 00:08:21.290 } 00:08:21.290 17:24:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.290 17:24:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63556 00:08:21.290 17:24:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63556 ']' 00:08:21.290 17:24:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63556 00:08:21.290 17:24:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:21.290 17:24:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:21.290 17:24:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63556 00:08:21.290 killing process with pid 63556 00:08:21.290 17:24:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:21.290 17:24:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:21.290 17:24:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63556' 00:08:21.290 17:24:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63556 00:08:21.290 [2024-12-07 17:24:54.418275] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:21.290 17:24:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63556 00:08:21.290 [2024-12-07 17:24:54.556223] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:22.668 17:24:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.jtB0X3cXG8 00:08:22.668 17:24:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 
00:08:22.668 17:24:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:22.668 17:24:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:22.668 17:24:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:22.668 ************************************ 00:08:22.668 END TEST raid_read_error_test 00:08:22.668 ************************************ 00:08:22.668 17:24:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:22.668 17:24:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:22.668 17:24:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:22.668 00:08:22.668 real 0m4.438s 00:08:22.668 user 0m5.361s 00:08:22.668 sys 0m0.550s 00:08:22.668 17:24:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:22.668 17:24:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.668 17:24:55 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:08:22.668 17:24:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:22.668 17:24:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:22.668 17:24:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:22.668 ************************************ 00:08:22.668 START TEST raid_write_error_test 00:08:22.668 ************************************ 00:08:22.668 17:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:08:22.668 17:24:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:22.668 17:24:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:22.668 17:24:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local 
error_io_type=write 00:08:22.668 17:24:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:22.668 17:24:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:22.668 17:24:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:22.668 17:24:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:22.668 17:24:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:22.668 17:24:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:22.668 17:24:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:22.668 17:24:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:22.668 17:24:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:22.668 17:24:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:22.668 17:24:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:22.668 17:24:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:22.668 17:24:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:22.668 17:24:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:22.668 17:24:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:22.668 17:24:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:22.668 17:24:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:22.668 17:24:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:22.668 17:24:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # 
bdevperf_log=/raidtest/tmp.FWXab5JGzI 00:08:22.668 17:24:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63707 00:08:22.668 17:24:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:22.668 17:24:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63707 00:08:22.668 17:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63707 ']' 00:08:22.668 17:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.668 17:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:22.668 17:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.668 17:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:22.668 17:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.668 [2024-12-07 17:24:55.930717] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:08:22.668 [2024-12-07 17:24:55.930910] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63707 ] 00:08:22.927 [2024-12-07 17:24:56.102406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.928 [2024-12-07 17:24:56.211761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.187 [2024-12-07 17:24:56.407381] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:23.187 [2024-12-07 17:24:56.407511] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:23.445 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:23.445 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:23.445 17:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:23.445 17:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:23.445 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.445 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.445 BaseBdev1_malloc 00:08:23.445 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.445 17:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:23.445 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.445 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.445 true 00:08:23.705 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:23.705 17:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:23.705 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.705 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.705 [2024-12-07 17:24:56.828185] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:23.705 [2024-12-07 17:24:56.828242] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:23.705 [2024-12-07 17:24:56.828264] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:23.705 [2024-12-07 17:24:56.828275] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:23.705 [2024-12-07 17:24:56.830446] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:23.705 [2024-12-07 17:24:56.830485] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:23.705 BaseBdev1 00:08:23.705 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.705 17:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:23.705 17:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:23.705 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.705 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.705 BaseBdev2_malloc 00:08:23.705 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.705 17:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:23.705 17:24:56 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.705 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.705 true 00:08:23.705 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.705 17:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:23.705 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.705 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.705 [2024-12-07 17:24:56.888430] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:23.705 [2024-12-07 17:24:56.888535] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:23.705 [2024-12-07 17:24:56.888599] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:23.705 [2024-12-07 17:24:56.888658] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:23.705 [2024-12-07 17:24:56.890877] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:23.705 [2024-12-07 17:24:56.890971] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:23.705 BaseBdev2 00:08:23.705 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.705 17:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:23.705 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.705 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.705 [2024-12-07 17:24:56.900467] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:23.705 [2024-12-07 17:24:56.902329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:23.705 [2024-12-07 17:24:56.902599] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:23.705 [2024-12-07 17:24:56.902657] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:23.705 [2024-12-07 17:24:56.902952] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:23.705 [2024-12-07 17:24:56.903195] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:23.705 [2024-12-07 17:24:56.903241] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:23.705 [2024-12-07 17:24:56.903439] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:23.705 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.705 17:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:23.705 17:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:23.705 17:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:23.705 17:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:23.705 17:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:23.705 17:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:23.705 17:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.705 17:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.705 17:24:56 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.705 17:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.705 17:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.705 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.705 17:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:23.705 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.705 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.705 17:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.705 "name": "raid_bdev1", 00:08:23.705 "uuid": "202bbe20-1ca5-475b-b044-cc32cb33caa8", 00:08:23.705 "strip_size_kb": 0, 00:08:23.705 "state": "online", 00:08:23.705 "raid_level": "raid1", 00:08:23.705 "superblock": true, 00:08:23.705 "num_base_bdevs": 2, 00:08:23.705 "num_base_bdevs_discovered": 2, 00:08:23.705 "num_base_bdevs_operational": 2, 00:08:23.705 "base_bdevs_list": [ 00:08:23.705 { 00:08:23.705 "name": "BaseBdev1", 00:08:23.705 "uuid": "69db2d41-55d2-5043-ae5e-8c20a5a55fd3", 00:08:23.705 "is_configured": true, 00:08:23.705 "data_offset": 2048, 00:08:23.705 "data_size": 63488 00:08:23.705 }, 00:08:23.705 { 00:08:23.705 "name": "BaseBdev2", 00:08:23.705 "uuid": "d1bfe606-f051-5245-b6af-99abfb90433d", 00:08:23.705 "is_configured": true, 00:08:23.705 "data_offset": 2048, 00:08:23.705 "data_size": 63488 00:08:23.705 } 00:08:23.705 ] 00:08:23.705 }' 00:08:23.705 17:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.705 17:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.273 17:24:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:24.273 17:24:57 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:24.273 [2024-12-07 17:24:57.440782] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:25.207 17:24:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:25.207 17:24:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.207 17:24:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.207 [2024-12-07 17:24:58.360971] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:08:25.207 [2024-12-07 17:24:58.361030] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:25.207 [2024-12-07 17:24:58.361227] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:08:25.207 17:24:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.207 17:24:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:25.207 17:24:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:25.207 17:24:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:08:25.207 17:24:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:08:25.207 17:24:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:25.207 17:24:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:25.207 17:24:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:25.207 17:24:58 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:25.207 17:24:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:25.207 17:24:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:25.207 17:24:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.207 17:24:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.207 17:24:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.207 17:24:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.207 17:24:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.207 17:24:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:25.207 17:24:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.207 17:24:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.207 17:24:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.207 17:24:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.207 "name": "raid_bdev1", 00:08:25.207 "uuid": "202bbe20-1ca5-475b-b044-cc32cb33caa8", 00:08:25.207 "strip_size_kb": 0, 00:08:25.207 "state": "online", 00:08:25.207 "raid_level": "raid1", 00:08:25.207 "superblock": true, 00:08:25.207 "num_base_bdevs": 2, 00:08:25.207 "num_base_bdevs_discovered": 1, 00:08:25.207 "num_base_bdevs_operational": 1, 00:08:25.207 "base_bdevs_list": [ 00:08:25.207 { 00:08:25.207 "name": null, 00:08:25.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.207 "is_configured": false, 00:08:25.207 "data_offset": 0, 00:08:25.207 "data_size": 63488 00:08:25.207 }, 00:08:25.207 { 00:08:25.207 "name": 
"BaseBdev2", 00:08:25.208 "uuid": "d1bfe606-f051-5245-b6af-99abfb90433d", 00:08:25.208 "is_configured": true, 00:08:25.208 "data_offset": 2048, 00:08:25.208 "data_size": 63488 00:08:25.208 } 00:08:25.208 ] 00:08:25.208 }' 00:08:25.208 17:24:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.208 17:24:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.591 17:24:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:25.591 17:24:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.591 17:24:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.591 [2024-12-07 17:24:58.822275] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:25.591 [2024-12-07 17:24:58.822393] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:25.591 [2024-12-07 17:24:58.825698] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:25.591 [2024-12-07 17:24:58.825791] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:25.591 [2024-12-07 17:24:58.825863] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:25.591 [2024-12-07 17:24:58.825877] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:25.591 { 00:08:25.591 "results": [ 00:08:25.591 { 00:08:25.591 "job": "raid_bdev1", 00:08:25.591 "core_mask": "0x1", 00:08:25.591 "workload": "randrw", 00:08:25.591 "percentage": 50, 00:08:25.591 "status": "finished", 00:08:25.591 "queue_depth": 1, 00:08:25.591 "io_size": 131072, 00:08:25.591 "runtime": 1.38251, 00:08:25.591 "iops": 20242.168230247884, 00:08:25.591 "mibps": 2530.2710287809855, 00:08:25.591 "io_failed": 0, 00:08:25.591 "io_timeout": 0, 
00:08:25.591 "avg_latency_us": 46.68362018642239, 00:08:25.591 "min_latency_us": 23.252401746724892, 00:08:25.591 "max_latency_us": 1395.1441048034935 00:08:25.591 } 00:08:25.591 ], 00:08:25.591 "core_count": 1 00:08:25.591 } 00:08:25.591 17:24:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.591 17:24:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63707 00:08:25.591 17:24:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63707 ']' 00:08:25.591 17:24:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63707 00:08:25.591 17:24:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:25.591 17:24:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:25.591 17:24:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63707 00:08:25.591 killing process with pid 63707 00:08:25.592 17:24:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:25.592 17:24:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:25.592 17:24:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63707' 00:08:25.592 17:24:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63707 00:08:25.592 [2024-12-07 17:24:58.870376] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:25.592 17:24:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63707 00:08:25.848 [2024-12-07 17:24:59.010693] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:27.219 17:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:27.219 17:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk 
'{print $6}' 00:08:27.219 17:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.FWXab5JGzI 00:08:27.219 17:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:27.219 17:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:27.220 ************************************ 00:08:27.220 END TEST raid_write_error_test 00:08:27.220 ************************************ 00:08:27.220 17:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:27.220 17:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:27.220 17:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:27.220 00:08:27.220 real 0m4.377s 00:08:27.220 user 0m5.256s 00:08:27.220 sys 0m0.536s 00:08:27.220 17:25:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:27.220 17:25:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.220 17:25:00 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:27.220 17:25:00 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:27.220 17:25:00 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:08:27.220 17:25:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:27.220 17:25:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:27.220 17:25:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:27.220 ************************************ 00:08:27.220 START TEST raid_state_function_test 00:08:27.220 ************************************ 00:08:27.220 17:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:08:27.220 17:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 
-- # local raid_level=raid0 00:08:27.220 17:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:27.220 17:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:27.220 17:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:27.220 17:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:27.220 17:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:27.220 17:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:27.220 17:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:27.220 17:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:27.220 17:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:27.220 17:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:27.220 17:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:27.220 17:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:27.220 17:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:27.220 17:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:27.220 17:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:27.220 17:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:27.220 17:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:27.220 17:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:27.220 17:25:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:27.220 17:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:27.220 17:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:27.220 17:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:27.220 17:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:27.220 17:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:27.220 17:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:27.220 17:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63845 00:08:27.220 17:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:27.220 17:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63845' 00:08:27.220 Process raid pid: 63845 00:08:27.220 17:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63845 00:08:27.220 17:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 63845 ']' 00:08:27.220 17:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.220 17:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:27.220 17:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:27.220 17:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:27.220 17:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.220 [2024-12-07 17:25:00.365075] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:08:27.220 [2024-12-07 17:25:00.365642] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:27.220 [2024-12-07 17:25:00.524959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.479 [2024-12-07 17:25:00.636823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.479 [2024-12-07 17:25:00.834720] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:27.479 [2024-12-07 17:25:00.834765] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:28.046 17:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:28.046 17:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:28.046 17:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:28.046 17:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.046 17:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.046 [2024-12-07 17:25:01.202580] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:28.046 [2024-12-07 17:25:01.202679] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:28.046 [2024-12-07 17:25:01.202711] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:28.046 [2024-12-07 17:25:01.202735] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:28.046 [2024-12-07 17:25:01.202755] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:28.046 [2024-12-07 17:25:01.202776] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:28.046 17:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.046 17:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:28.046 17:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.046 17:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:28.046 17:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:28.046 17:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:28.046 17:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:28.046 17:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.046 17:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.046 17:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.046 17:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.046 17:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.046 17:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:08:28.046 17:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.046 17:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.046 17:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.046 17:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.046 "name": "Existed_Raid", 00:08:28.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.046 "strip_size_kb": 64, 00:08:28.046 "state": "configuring", 00:08:28.046 "raid_level": "raid0", 00:08:28.046 "superblock": false, 00:08:28.046 "num_base_bdevs": 3, 00:08:28.046 "num_base_bdevs_discovered": 0, 00:08:28.046 "num_base_bdevs_operational": 3, 00:08:28.046 "base_bdevs_list": [ 00:08:28.046 { 00:08:28.046 "name": "BaseBdev1", 00:08:28.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.046 "is_configured": false, 00:08:28.046 "data_offset": 0, 00:08:28.046 "data_size": 0 00:08:28.046 }, 00:08:28.046 { 00:08:28.046 "name": "BaseBdev2", 00:08:28.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.046 "is_configured": false, 00:08:28.046 "data_offset": 0, 00:08:28.046 "data_size": 0 00:08:28.046 }, 00:08:28.046 { 00:08:28.046 "name": "BaseBdev3", 00:08:28.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.046 "is_configured": false, 00:08:28.046 "data_offset": 0, 00:08:28.046 "data_size": 0 00:08:28.046 } 00:08:28.046 ] 00:08:28.046 }' 00:08:28.046 17:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.046 17:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.304 17:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:28.304 17:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.304 17:25:01 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.304 [2024-12-07 17:25:01.661746] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:28.304 [2024-12-07 17:25:01.661826] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:28.304 17:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.304 17:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:28.304 17:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.304 17:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.304 [2024-12-07 17:25:01.673709] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:28.304 [2024-12-07 17:25:01.673787] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:28.304 [2024-12-07 17:25:01.673815] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:28.304 [2024-12-07 17:25:01.673837] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:28.304 [2024-12-07 17:25:01.673855] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:28.304 [2024-12-07 17:25:01.673875] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:28.304 17:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.304 17:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:28.304 17:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:28.304 17:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.563 [2024-12-07 17:25:01.721134] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:28.563 BaseBdev1 00:08:28.563 17:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.563 17:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:28.563 17:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:28.563 17:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:28.563 17:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:28.563 17:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:28.563 17:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:28.563 17:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:28.563 17:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.563 17:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.563 17:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.563 17:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:28.563 17:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.563 17:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.563 [ 00:08:28.563 { 00:08:28.563 "name": "BaseBdev1", 00:08:28.563 "aliases": [ 00:08:28.563 "659e0155-ab78-4ebd-9cd1-41cc0cbea333" 00:08:28.563 ], 00:08:28.563 
"product_name": "Malloc disk", 00:08:28.563 "block_size": 512, 00:08:28.563 "num_blocks": 65536, 00:08:28.563 "uuid": "659e0155-ab78-4ebd-9cd1-41cc0cbea333", 00:08:28.563 "assigned_rate_limits": { 00:08:28.563 "rw_ios_per_sec": 0, 00:08:28.563 "rw_mbytes_per_sec": 0, 00:08:28.563 "r_mbytes_per_sec": 0, 00:08:28.563 "w_mbytes_per_sec": 0 00:08:28.563 }, 00:08:28.563 "claimed": true, 00:08:28.563 "claim_type": "exclusive_write", 00:08:28.563 "zoned": false, 00:08:28.563 "supported_io_types": { 00:08:28.563 "read": true, 00:08:28.563 "write": true, 00:08:28.563 "unmap": true, 00:08:28.563 "flush": true, 00:08:28.563 "reset": true, 00:08:28.563 "nvme_admin": false, 00:08:28.563 "nvme_io": false, 00:08:28.563 "nvme_io_md": false, 00:08:28.563 "write_zeroes": true, 00:08:28.563 "zcopy": true, 00:08:28.563 "get_zone_info": false, 00:08:28.563 "zone_management": false, 00:08:28.563 "zone_append": false, 00:08:28.563 "compare": false, 00:08:28.563 "compare_and_write": false, 00:08:28.563 "abort": true, 00:08:28.563 "seek_hole": false, 00:08:28.563 "seek_data": false, 00:08:28.563 "copy": true, 00:08:28.563 "nvme_iov_md": false 00:08:28.563 }, 00:08:28.563 "memory_domains": [ 00:08:28.563 { 00:08:28.563 "dma_device_id": "system", 00:08:28.563 "dma_device_type": 1 00:08:28.563 }, 00:08:28.563 { 00:08:28.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.563 "dma_device_type": 2 00:08:28.563 } 00:08:28.563 ], 00:08:28.563 "driver_specific": {} 00:08:28.563 } 00:08:28.563 ] 00:08:28.563 17:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.563 17:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:28.563 17:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:28.563 17:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.563 17:25:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:28.563 17:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:28.563 17:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:28.563 17:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:28.563 17:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.563 17:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.563 17:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.563 17:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.563 17:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.563 17:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.563 17:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.563 17:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.563 17:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.563 17:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.563 "name": "Existed_Raid", 00:08:28.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.563 "strip_size_kb": 64, 00:08:28.563 "state": "configuring", 00:08:28.563 "raid_level": "raid0", 00:08:28.563 "superblock": false, 00:08:28.563 "num_base_bdevs": 3, 00:08:28.563 "num_base_bdevs_discovered": 1, 00:08:28.563 "num_base_bdevs_operational": 3, 00:08:28.563 "base_bdevs_list": [ 00:08:28.563 { 00:08:28.563 "name": "BaseBdev1", 
00:08:28.563 "uuid": "659e0155-ab78-4ebd-9cd1-41cc0cbea333", 00:08:28.563 "is_configured": true, 00:08:28.563 "data_offset": 0, 00:08:28.563 "data_size": 65536 00:08:28.563 }, 00:08:28.563 { 00:08:28.563 "name": "BaseBdev2", 00:08:28.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.563 "is_configured": false, 00:08:28.563 "data_offset": 0, 00:08:28.563 "data_size": 0 00:08:28.563 }, 00:08:28.563 { 00:08:28.563 "name": "BaseBdev3", 00:08:28.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.563 "is_configured": false, 00:08:28.563 "data_offset": 0, 00:08:28.563 "data_size": 0 00:08:28.563 } 00:08:28.563 ] 00:08:28.563 }' 00:08:28.563 17:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.563 17:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.129 17:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:29.129 17:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.129 17:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.129 [2024-12-07 17:25:02.212408] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:29.129 [2024-12-07 17:25:02.212508] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:29.129 17:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.129 17:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:29.129 17:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.129 17:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.129 [2024-12-07 
17:25:02.224437] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:29.129 [2024-12-07 17:25:02.226376] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:29.129 [2024-12-07 17:25:02.226466] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:29.129 [2024-12-07 17:25:02.226497] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:29.129 [2024-12-07 17:25:02.226521] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:29.129 17:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.129 17:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:29.129 17:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:29.129 17:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:29.129 17:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:29.129 17:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:29.129 17:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:29.129 17:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:29.129 17:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:29.129 17:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.129 17:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.129 17:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:29.129 17:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.129 17:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.129 17:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.129 17:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:29.129 17:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.129 17:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.129 17:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.129 "name": "Existed_Raid", 00:08:29.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.129 "strip_size_kb": 64, 00:08:29.129 "state": "configuring", 00:08:29.129 "raid_level": "raid0", 00:08:29.129 "superblock": false, 00:08:29.129 "num_base_bdevs": 3, 00:08:29.129 "num_base_bdevs_discovered": 1, 00:08:29.129 "num_base_bdevs_operational": 3, 00:08:29.129 "base_bdevs_list": [ 00:08:29.129 { 00:08:29.129 "name": "BaseBdev1", 00:08:29.129 "uuid": "659e0155-ab78-4ebd-9cd1-41cc0cbea333", 00:08:29.129 "is_configured": true, 00:08:29.129 "data_offset": 0, 00:08:29.129 "data_size": 65536 00:08:29.129 }, 00:08:29.129 { 00:08:29.129 "name": "BaseBdev2", 00:08:29.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.129 "is_configured": false, 00:08:29.129 "data_offset": 0, 00:08:29.129 "data_size": 0 00:08:29.129 }, 00:08:29.129 { 00:08:29.129 "name": "BaseBdev3", 00:08:29.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.129 "is_configured": false, 00:08:29.129 "data_offset": 0, 00:08:29.129 "data_size": 0 00:08:29.129 } 00:08:29.129 ] 00:08:29.129 }' 00:08:29.129 17:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:29.129 17:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.386 17:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:29.386 17:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.386 17:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.386 [2024-12-07 17:25:02.700725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:29.386 BaseBdev2 00:08:29.386 17:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.386 17:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:29.386 17:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:29.386 17:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:29.386 17:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:29.386 17:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:29.386 17:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:29.386 17:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:29.387 17:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.387 17:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.387 17:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.387 17:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:29.387 17:25:02 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.387 17:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.387 [ 00:08:29.387 { 00:08:29.387 "name": "BaseBdev2", 00:08:29.387 "aliases": [ 00:08:29.387 "0d160479-05e9-419f-abe5-5a8758ff51d4" 00:08:29.387 ], 00:08:29.387 "product_name": "Malloc disk", 00:08:29.387 "block_size": 512, 00:08:29.387 "num_blocks": 65536, 00:08:29.387 "uuid": "0d160479-05e9-419f-abe5-5a8758ff51d4", 00:08:29.387 "assigned_rate_limits": { 00:08:29.387 "rw_ios_per_sec": 0, 00:08:29.387 "rw_mbytes_per_sec": 0, 00:08:29.387 "r_mbytes_per_sec": 0, 00:08:29.387 "w_mbytes_per_sec": 0 00:08:29.387 }, 00:08:29.387 "claimed": true, 00:08:29.387 "claim_type": "exclusive_write", 00:08:29.387 "zoned": false, 00:08:29.387 "supported_io_types": { 00:08:29.387 "read": true, 00:08:29.387 "write": true, 00:08:29.387 "unmap": true, 00:08:29.387 "flush": true, 00:08:29.387 "reset": true, 00:08:29.387 "nvme_admin": false, 00:08:29.387 "nvme_io": false, 00:08:29.387 "nvme_io_md": false, 00:08:29.387 "write_zeroes": true, 00:08:29.387 "zcopy": true, 00:08:29.387 "get_zone_info": false, 00:08:29.387 "zone_management": false, 00:08:29.387 "zone_append": false, 00:08:29.387 "compare": false, 00:08:29.387 "compare_and_write": false, 00:08:29.387 "abort": true, 00:08:29.387 "seek_hole": false, 00:08:29.387 "seek_data": false, 00:08:29.387 "copy": true, 00:08:29.387 "nvme_iov_md": false 00:08:29.387 }, 00:08:29.387 "memory_domains": [ 00:08:29.387 { 00:08:29.387 "dma_device_id": "system", 00:08:29.387 "dma_device_type": 1 00:08:29.387 }, 00:08:29.387 { 00:08:29.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.387 "dma_device_type": 2 00:08:29.387 } 00:08:29.387 ], 00:08:29.387 "driver_specific": {} 00:08:29.387 } 00:08:29.387 ] 00:08:29.387 17:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.387 17:25:02 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:29.387 17:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:29.387 17:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:29.387 17:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:29.387 17:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:29.387 17:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:29.387 17:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:29.387 17:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:29.387 17:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:29.387 17:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.387 17:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.387 17:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.387 17:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.387 17:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.387 17:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:29.387 17:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.387 17:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.645 17:25:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.645 17:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.645 "name": "Existed_Raid", 00:08:29.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.645 "strip_size_kb": 64, 00:08:29.645 "state": "configuring", 00:08:29.645 "raid_level": "raid0", 00:08:29.645 "superblock": false, 00:08:29.645 "num_base_bdevs": 3, 00:08:29.645 "num_base_bdevs_discovered": 2, 00:08:29.645 "num_base_bdevs_operational": 3, 00:08:29.645 "base_bdevs_list": [ 00:08:29.645 { 00:08:29.645 "name": "BaseBdev1", 00:08:29.645 "uuid": "659e0155-ab78-4ebd-9cd1-41cc0cbea333", 00:08:29.645 "is_configured": true, 00:08:29.645 "data_offset": 0, 00:08:29.645 "data_size": 65536 00:08:29.645 }, 00:08:29.645 { 00:08:29.645 "name": "BaseBdev2", 00:08:29.645 "uuid": "0d160479-05e9-419f-abe5-5a8758ff51d4", 00:08:29.645 "is_configured": true, 00:08:29.645 "data_offset": 0, 00:08:29.645 "data_size": 65536 00:08:29.645 }, 00:08:29.645 { 00:08:29.645 "name": "BaseBdev3", 00:08:29.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.645 "is_configured": false, 00:08:29.645 "data_offset": 0, 00:08:29.645 "data_size": 0 00:08:29.645 } 00:08:29.645 ] 00:08:29.645 }' 00:08:29.645 17:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.645 17:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.903 17:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:29.903 17:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.903 17:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.903 [2024-12-07 17:25:03.177896] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:29.903 [2024-12-07 17:25:03.178061] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:29.903 [2024-12-07 17:25:03.178116] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:29.903 [2024-12-07 17:25:03.178534] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:29.903 BaseBdev3 00:08:29.903 [2024-12-07 17:25:03.178748] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:29.903 [2024-12-07 17:25:03.178764] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:29.903 [2024-12-07 17:25:03.179060] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:29.903 17:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.903 17:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:29.903 17:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:29.903 17:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:29.903 17:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:29.903 17:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:29.903 17:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:29.903 17:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:29.903 17:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.903 17:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.903 17:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.903 
17:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:29.903 17:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.903 17:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.903 [ 00:08:29.903 { 00:08:29.903 "name": "BaseBdev3", 00:08:29.903 "aliases": [ 00:08:29.903 "9f3bac4a-b19e-434a-a1de-001f678f798b" 00:08:29.903 ], 00:08:29.903 "product_name": "Malloc disk", 00:08:29.903 "block_size": 512, 00:08:29.903 "num_blocks": 65536, 00:08:29.903 "uuid": "9f3bac4a-b19e-434a-a1de-001f678f798b", 00:08:29.903 "assigned_rate_limits": { 00:08:29.903 "rw_ios_per_sec": 0, 00:08:29.903 "rw_mbytes_per_sec": 0, 00:08:29.903 "r_mbytes_per_sec": 0, 00:08:29.903 "w_mbytes_per_sec": 0 00:08:29.903 }, 00:08:29.903 "claimed": true, 00:08:29.903 "claim_type": "exclusive_write", 00:08:29.903 "zoned": false, 00:08:29.903 "supported_io_types": { 00:08:29.903 "read": true, 00:08:29.903 "write": true, 00:08:29.903 "unmap": true, 00:08:29.903 "flush": true, 00:08:29.903 "reset": true, 00:08:29.903 "nvme_admin": false, 00:08:29.903 "nvme_io": false, 00:08:29.903 "nvme_io_md": false, 00:08:29.903 "write_zeroes": true, 00:08:29.903 "zcopy": true, 00:08:29.903 "get_zone_info": false, 00:08:29.903 "zone_management": false, 00:08:29.903 "zone_append": false, 00:08:29.903 "compare": false, 00:08:29.903 "compare_and_write": false, 00:08:29.903 "abort": true, 00:08:29.903 "seek_hole": false, 00:08:29.903 "seek_data": false, 00:08:29.903 "copy": true, 00:08:29.903 "nvme_iov_md": false 00:08:29.903 }, 00:08:29.903 "memory_domains": [ 00:08:29.903 { 00:08:29.903 "dma_device_id": "system", 00:08:29.903 "dma_device_type": 1 00:08:29.903 }, 00:08:29.903 { 00:08:29.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.903 "dma_device_type": 2 00:08:29.903 } 00:08:29.903 ], 00:08:29.903 "driver_specific": {} 00:08:29.903 } 00:08:29.903 ] 
00:08:29.903 17:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.903 17:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:29.903 17:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:29.903 17:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:29.903 17:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:29.903 17:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:29.903 17:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:29.903 17:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:29.903 17:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:29.903 17:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:29.903 17:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.903 17:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.903 17:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.903 17:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.903 17:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.903 17:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:29.903 17:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.903 17:25:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:29.903 17:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.903 17:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.903 "name": "Existed_Raid", 00:08:29.903 "uuid": "6d16a5a9-2407-447d-8cec-3e1c52664576", 00:08:29.903 "strip_size_kb": 64, 00:08:29.903 "state": "online", 00:08:29.903 "raid_level": "raid0", 00:08:29.903 "superblock": false, 00:08:29.903 "num_base_bdevs": 3, 00:08:29.903 "num_base_bdevs_discovered": 3, 00:08:29.903 "num_base_bdevs_operational": 3, 00:08:29.903 "base_bdevs_list": [ 00:08:29.903 { 00:08:29.903 "name": "BaseBdev1", 00:08:29.903 "uuid": "659e0155-ab78-4ebd-9cd1-41cc0cbea333", 00:08:29.903 "is_configured": true, 00:08:29.903 "data_offset": 0, 00:08:29.903 "data_size": 65536 00:08:29.903 }, 00:08:29.903 { 00:08:29.903 "name": "BaseBdev2", 00:08:29.903 "uuid": "0d160479-05e9-419f-abe5-5a8758ff51d4", 00:08:29.903 "is_configured": true, 00:08:29.903 "data_offset": 0, 00:08:29.903 "data_size": 65536 00:08:29.903 }, 00:08:29.903 { 00:08:29.903 "name": "BaseBdev3", 00:08:29.903 "uuid": "9f3bac4a-b19e-434a-a1de-001f678f798b", 00:08:29.903 "is_configured": true, 00:08:29.903 "data_offset": 0, 00:08:29.903 "data_size": 65536 00:08:29.903 } 00:08:29.903 ] 00:08:29.903 }' 00:08:29.903 17:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.903 17:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.473 17:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:30.473 17:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:30.473 17:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:30.473 17:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:08:30.473 17:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:30.473 17:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:30.473 17:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:30.473 17:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:30.473 17:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.473 17:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.473 [2024-12-07 17:25:03.609504] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:30.473 17:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.473 17:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:30.473 "name": "Existed_Raid", 00:08:30.473 "aliases": [ 00:08:30.473 "6d16a5a9-2407-447d-8cec-3e1c52664576" 00:08:30.473 ], 00:08:30.473 "product_name": "Raid Volume", 00:08:30.473 "block_size": 512, 00:08:30.473 "num_blocks": 196608, 00:08:30.473 "uuid": "6d16a5a9-2407-447d-8cec-3e1c52664576", 00:08:30.473 "assigned_rate_limits": { 00:08:30.473 "rw_ios_per_sec": 0, 00:08:30.473 "rw_mbytes_per_sec": 0, 00:08:30.473 "r_mbytes_per_sec": 0, 00:08:30.473 "w_mbytes_per_sec": 0 00:08:30.473 }, 00:08:30.473 "claimed": false, 00:08:30.473 "zoned": false, 00:08:30.473 "supported_io_types": { 00:08:30.473 "read": true, 00:08:30.473 "write": true, 00:08:30.473 "unmap": true, 00:08:30.473 "flush": true, 00:08:30.473 "reset": true, 00:08:30.473 "nvme_admin": false, 00:08:30.473 "nvme_io": false, 00:08:30.473 "nvme_io_md": false, 00:08:30.473 "write_zeroes": true, 00:08:30.473 "zcopy": false, 00:08:30.473 "get_zone_info": false, 00:08:30.473 "zone_management": false, 00:08:30.473 
"zone_append": false, 00:08:30.473 "compare": false, 00:08:30.473 "compare_and_write": false, 00:08:30.473 "abort": false, 00:08:30.473 "seek_hole": false, 00:08:30.473 "seek_data": false, 00:08:30.473 "copy": false, 00:08:30.473 "nvme_iov_md": false 00:08:30.473 }, 00:08:30.473 "memory_domains": [ 00:08:30.473 { 00:08:30.473 "dma_device_id": "system", 00:08:30.473 "dma_device_type": 1 00:08:30.473 }, 00:08:30.473 { 00:08:30.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.473 "dma_device_type": 2 00:08:30.473 }, 00:08:30.473 { 00:08:30.473 "dma_device_id": "system", 00:08:30.473 "dma_device_type": 1 00:08:30.473 }, 00:08:30.473 { 00:08:30.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.473 "dma_device_type": 2 00:08:30.473 }, 00:08:30.473 { 00:08:30.473 "dma_device_id": "system", 00:08:30.473 "dma_device_type": 1 00:08:30.473 }, 00:08:30.473 { 00:08:30.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.473 "dma_device_type": 2 00:08:30.473 } 00:08:30.473 ], 00:08:30.473 "driver_specific": { 00:08:30.473 "raid": { 00:08:30.473 "uuid": "6d16a5a9-2407-447d-8cec-3e1c52664576", 00:08:30.473 "strip_size_kb": 64, 00:08:30.473 "state": "online", 00:08:30.473 "raid_level": "raid0", 00:08:30.473 "superblock": false, 00:08:30.473 "num_base_bdevs": 3, 00:08:30.473 "num_base_bdevs_discovered": 3, 00:08:30.473 "num_base_bdevs_operational": 3, 00:08:30.473 "base_bdevs_list": [ 00:08:30.473 { 00:08:30.473 "name": "BaseBdev1", 00:08:30.473 "uuid": "659e0155-ab78-4ebd-9cd1-41cc0cbea333", 00:08:30.473 "is_configured": true, 00:08:30.473 "data_offset": 0, 00:08:30.473 "data_size": 65536 00:08:30.473 }, 00:08:30.473 { 00:08:30.473 "name": "BaseBdev2", 00:08:30.473 "uuid": "0d160479-05e9-419f-abe5-5a8758ff51d4", 00:08:30.473 "is_configured": true, 00:08:30.473 "data_offset": 0, 00:08:30.473 "data_size": 65536 00:08:30.473 }, 00:08:30.473 { 00:08:30.473 "name": "BaseBdev3", 00:08:30.473 "uuid": "9f3bac4a-b19e-434a-a1de-001f678f798b", 00:08:30.473 "is_configured": true, 
00:08:30.473 "data_offset": 0, 00:08:30.473 "data_size": 65536 00:08:30.473 } 00:08:30.473 ] 00:08:30.473 } 00:08:30.473 } 00:08:30.473 }' 00:08:30.473 17:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:30.473 17:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:30.473 BaseBdev2 00:08:30.473 BaseBdev3' 00:08:30.473 17:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:30.473 17:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:30.473 17:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:30.473 17:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:30.473 17:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:30.473 17:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.473 17:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.473 17:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.473 17:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:30.473 17:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:30.473 17:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:30.473 17:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:30.473 17:25:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:30.473 17:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.473 17:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.473 17:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.473 17:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:30.473 17:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:30.473 17:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:30.732 17:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:30.732 17:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:30.732 17:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.732 17:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.733 17:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.733 17:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:30.733 17:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:30.733 17:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:30.733 17:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.733 17:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.733 [2024-12-07 17:25:03.904802] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:30.733 [2024-12-07 17:25:03.904902] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:30.733 [2024-12-07 17:25:03.905009] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:30.733 17:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.733 17:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:30.733 17:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:30.733 17:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:30.733 17:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:30.733 17:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:30.733 17:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:30.733 17:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:30.733 17:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:30.733 17:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:30.733 17:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:30.733 17:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:30.733 17:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.733 17:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.733 17:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:08:30.733 17:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.733 17:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.733 17:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:30.733 17:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.733 17:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.733 17:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.733 17:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.733 "name": "Existed_Raid", 00:08:30.733 "uuid": "6d16a5a9-2407-447d-8cec-3e1c52664576", 00:08:30.733 "strip_size_kb": 64, 00:08:30.733 "state": "offline", 00:08:30.733 "raid_level": "raid0", 00:08:30.733 "superblock": false, 00:08:30.733 "num_base_bdevs": 3, 00:08:30.733 "num_base_bdevs_discovered": 2, 00:08:30.733 "num_base_bdevs_operational": 2, 00:08:30.733 "base_bdevs_list": [ 00:08:30.733 { 00:08:30.733 "name": null, 00:08:30.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.733 "is_configured": false, 00:08:30.733 "data_offset": 0, 00:08:30.733 "data_size": 65536 00:08:30.733 }, 00:08:30.733 { 00:08:30.733 "name": "BaseBdev2", 00:08:30.733 "uuid": "0d160479-05e9-419f-abe5-5a8758ff51d4", 00:08:30.733 "is_configured": true, 00:08:30.733 "data_offset": 0, 00:08:30.733 "data_size": 65536 00:08:30.733 }, 00:08:30.733 { 00:08:30.733 "name": "BaseBdev3", 00:08:30.733 "uuid": "9f3bac4a-b19e-434a-a1de-001f678f798b", 00:08:30.733 "is_configured": true, 00:08:30.733 "data_offset": 0, 00:08:30.733 "data_size": 65536 00:08:30.733 } 00:08:30.733 ] 00:08:30.733 }' 00:08:30.733 17:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.733 17:25:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.303 17:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:31.303 17:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:31.303 17:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.303 17:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:31.303 17:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.303 17:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.303 17:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.303 17:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:31.303 17:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:31.303 17:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:31.303 17:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.303 17:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.303 [2024-12-07 17:25:04.514670] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:31.303 17:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.303 17:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:31.303 17:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:31.303 17:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.303 17:25:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.303 17:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.303 17:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:31.303 17:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.303 17:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:31.303 17:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:31.303 17:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:31.303 17:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.303 17:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.303 [2024-12-07 17:25:04.662255] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:31.303 [2024-12-07 17:25:04.662373] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:31.563 17:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.563 17:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:31.563 17:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:31.563 17:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.563 17:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:31.563 17:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.563 17:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:08:31.563 17:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.563 17:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:31.563 17:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:31.563 17:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:31.563 17:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:31.563 17:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:31.563 17:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:31.563 17:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.563 17:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.563 BaseBdev2 00:08:31.563 17:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.563 17:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:31.563 17:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:31.563 17:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:31.563 17:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:31.563 17:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:31.563 17:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:31.563 17:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:31.563 17:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:31.563 17:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.563 17:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.563 17:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:31.563 17:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.563 17:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.563 [ 00:08:31.563 { 00:08:31.563 "name": "BaseBdev2", 00:08:31.563 "aliases": [ 00:08:31.563 "a2937640-dc8f-414d-8fcb-8482405f0ffe" 00:08:31.563 ], 00:08:31.563 "product_name": "Malloc disk", 00:08:31.563 "block_size": 512, 00:08:31.563 "num_blocks": 65536, 00:08:31.563 "uuid": "a2937640-dc8f-414d-8fcb-8482405f0ffe", 00:08:31.564 "assigned_rate_limits": { 00:08:31.564 "rw_ios_per_sec": 0, 00:08:31.564 "rw_mbytes_per_sec": 0, 00:08:31.564 "r_mbytes_per_sec": 0, 00:08:31.564 "w_mbytes_per_sec": 0 00:08:31.564 }, 00:08:31.564 "claimed": false, 00:08:31.564 "zoned": false, 00:08:31.564 "supported_io_types": { 00:08:31.564 "read": true, 00:08:31.564 "write": true, 00:08:31.564 "unmap": true, 00:08:31.564 "flush": true, 00:08:31.564 "reset": true, 00:08:31.564 "nvme_admin": false, 00:08:31.564 "nvme_io": false, 00:08:31.564 "nvme_io_md": false, 00:08:31.564 "write_zeroes": true, 00:08:31.564 "zcopy": true, 00:08:31.564 "get_zone_info": false, 00:08:31.564 "zone_management": false, 00:08:31.564 "zone_append": false, 00:08:31.564 "compare": false, 00:08:31.564 "compare_and_write": false, 00:08:31.564 "abort": true, 00:08:31.564 "seek_hole": false, 00:08:31.564 "seek_data": false, 00:08:31.564 "copy": true, 00:08:31.564 "nvme_iov_md": false 00:08:31.564 }, 00:08:31.564 "memory_domains": [ 00:08:31.564 { 00:08:31.564 "dma_device_id": "system", 00:08:31.564 "dma_device_type": 1 00:08:31.564 }, 
00:08:31.564 { 00:08:31.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.564 "dma_device_type": 2 00:08:31.564 } 00:08:31.564 ], 00:08:31.564 "driver_specific": {} 00:08:31.564 } 00:08:31.564 ] 00:08:31.564 17:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.564 17:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:31.564 17:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:31.564 17:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:31.564 17:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:31.564 17:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.564 17:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.564 BaseBdev3 00:08:31.564 17:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.564 17:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:31.564 17:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:31.564 17:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:31.564 17:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:31.564 17:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:31.564 17:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:31.564 17:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:31.564 17:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:31.564 17:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.564 17:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.824 17:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:31.824 17:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.824 17:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.824 [ 00:08:31.824 { 00:08:31.824 "name": "BaseBdev3", 00:08:31.824 "aliases": [ 00:08:31.824 "5362d8a1-6503-4699-8146-fdd16fae2d6c" 00:08:31.824 ], 00:08:31.824 "product_name": "Malloc disk", 00:08:31.824 "block_size": 512, 00:08:31.824 "num_blocks": 65536, 00:08:31.824 "uuid": "5362d8a1-6503-4699-8146-fdd16fae2d6c", 00:08:31.824 "assigned_rate_limits": { 00:08:31.824 "rw_ios_per_sec": 0, 00:08:31.824 "rw_mbytes_per_sec": 0, 00:08:31.824 "r_mbytes_per_sec": 0, 00:08:31.824 "w_mbytes_per_sec": 0 00:08:31.824 }, 00:08:31.824 "claimed": false, 00:08:31.824 "zoned": false, 00:08:31.824 "supported_io_types": { 00:08:31.824 "read": true, 00:08:31.824 "write": true, 00:08:31.824 "unmap": true, 00:08:31.824 "flush": true, 00:08:31.824 "reset": true, 00:08:31.824 "nvme_admin": false, 00:08:31.824 "nvme_io": false, 00:08:31.824 "nvme_io_md": false, 00:08:31.824 "write_zeroes": true, 00:08:31.824 "zcopy": true, 00:08:31.824 "get_zone_info": false, 00:08:31.824 "zone_management": false, 00:08:31.824 "zone_append": false, 00:08:31.824 "compare": false, 00:08:31.824 "compare_and_write": false, 00:08:31.824 "abort": true, 00:08:31.824 "seek_hole": false, 00:08:31.824 "seek_data": false, 00:08:31.824 "copy": true, 00:08:31.824 "nvme_iov_md": false 00:08:31.824 }, 00:08:31.824 "memory_domains": [ 00:08:31.824 { 00:08:31.824 "dma_device_id": "system", 00:08:31.824 "dma_device_type": 1 00:08:31.824 }, 00:08:31.824 { 
00:08:31.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.824 "dma_device_type": 2 00:08:31.824 } 00:08:31.824 ], 00:08:31.824 "driver_specific": {} 00:08:31.824 } 00:08:31.824 ] 00:08:31.824 17:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.824 17:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:31.824 17:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:31.824 17:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:31.824 17:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:31.824 17:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.824 17:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.824 [2024-12-07 17:25:04.974116] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:31.824 [2024-12-07 17:25:04.974200] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:31.824 [2024-12-07 17:25:04.974241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:31.824 [2024-12-07 17:25:04.976019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:31.824 17:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.824 17:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:31.824 17:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:31.824 17:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:08:31.824 17:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:31.824 17:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:31.824 17:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:31.824 17:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.824 17:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.824 17:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.824 17:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.824 17:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:31.824 17:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.824 17:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.824 17:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.824 17:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.824 17:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.824 "name": "Existed_Raid", 00:08:31.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:31.824 "strip_size_kb": 64, 00:08:31.824 "state": "configuring", 00:08:31.824 "raid_level": "raid0", 00:08:31.824 "superblock": false, 00:08:31.824 "num_base_bdevs": 3, 00:08:31.824 "num_base_bdevs_discovered": 2, 00:08:31.824 "num_base_bdevs_operational": 3, 00:08:31.824 "base_bdevs_list": [ 00:08:31.824 { 00:08:31.824 "name": "BaseBdev1", 00:08:31.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:31.824 
"is_configured": false, 00:08:31.824 "data_offset": 0, 00:08:31.824 "data_size": 0 00:08:31.824 }, 00:08:31.824 { 00:08:31.824 "name": "BaseBdev2", 00:08:31.824 "uuid": "a2937640-dc8f-414d-8fcb-8482405f0ffe", 00:08:31.824 "is_configured": true, 00:08:31.824 "data_offset": 0, 00:08:31.824 "data_size": 65536 00:08:31.824 }, 00:08:31.824 { 00:08:31.824 "name": "BaseBdev3", 00:08:31.824 "uuid": "5362d8a1-6503-4699-8146-fdd16fae2d6c", 00:08:31.824 "is_configured": true, 00:08:31.824 "data_offset": 0, 00:08:31.824 "data_size": 65536 00:08:31.824 } 00:08:31.824 ] 00:08:31.824 }' 00:08:31.824 17:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.824 17:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.083 17:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:32.083 17:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.083 17:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.083 [2024-12-07 17:25:05.437352] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:32.083 17:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.083 17:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:32.083 17:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:32.084 17:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:32.084 17:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:32.084 17:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:32.084 17:25:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:32.084 17:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.084 17:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.084 17:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.084 17:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.084 17:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:32.084 17:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.084 17:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.084 17:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.342 17:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.342 17:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.342 "name": "Existed_Raid", 00:08:32.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.342 "strip_size_kb": 64, 00:08:32.342 "state": "configuring", 00:08:32.342 "raid_level": "raid0", 00:08:32.342 "superblock": false, 00:08:32.342 "num_base_bdevs": 3, 00:08:32.342 "num_base_bdevs_discovered": 1, 00:08:32.342 "num_base_bdevs_operational": 3, 00:08:32.342 "base_bdevs_list": [ 00:08:32.342 { 00:08:32.342 "name": "BaseBdev1", 00:08:32.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.342 "is_configured": false, 00:08:32.342 "data_offset": 0, 00:08:32.342 "data_size": 0 00:08:32.342 }, 00:08:32.342 { 00:08:32.342 "name": null, 00:08:32.342 "uuid": "a2937640-dc8f-414d-8fcb-8482405f0ffe", 00:08:32.342 "is_configured": false, 00:08:32.342 "data_offset": 0, 
00:08:32.342 "data_size": 65536 00:08:32.342 }, 00:08:32.342 { 00:08:32.342 "name": "BaseBdev3", 00:08:32.342 "uuid": "5362d8a1-6503-4699-8146-fdd16fae2d6c", 00:08:32.342 "is_configured": true, 00:08:32.342 "data_offset": 0, 00:08:32.342 "data_size": 65536 00:08:32.342 } 00:08:32.342 ] 00:08:32.342 }' 00:08:32.342 17:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.342 17:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.601 17:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.601 17:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.601 17:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:32.601 17:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.601 17:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.601 17:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:32.601 17:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:32.601 17:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.601 17:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.601 [2024-12-07 17:25:05.960504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:32.601 BaseBdev1 00:08:32.601 17:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.601 17:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:32.601 17:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev1 00:08:32.601 17:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:32.601 17:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:32.601 17:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:32.601 17:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:32.601 17:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:32.601 17:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.601 17:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.601 17:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.601 17:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:32.601 17:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.601 17:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.860 [ 00:08:32.860 { 00:08:32.860 "name": "BaseBdev1", 00:08:32.860 "aliases": [ 00:08:32.860 "ca9361ad-4dbc-4f66-a6a1-138554ffbc93" 00:08:32.860 ], 00:08:32.860 "product_name": "Malloc disk", 00:08:32.860 "block_size": 512, 00:08:32.860 "num_blocks": 65536, 00:08:32.860 "uuid": "ca9361ad-4dbc-4f66-a6a1-138554ffbc93", 00:08:32.860 "assigned_rate_limits": { 00:08:32.860 "rw_ios_per_sec": 0, 00:08:32.860 "rw_mbytes_per_sec": 0, 00:08:32.860 "r_mbytes_per_sec": 0, 00:08:32.860 "w_mbytes_per_sec": 0 00:08:32.860 }, 00:08:32.860 "claimed": true, 00:08:32.860 "claim_type": "exclusive_write", 00:08:32.860 "zoned": false, 00:08:32.860 "supported_io_types": { 00:08:32.860 "read": true, 00:08:32.860 "write": true, 00:08:32.860 "unmap": 
true, 00:08:32.860 "flush": true, 00:08:32.860 "reset": true, 00:08:32.860 "nvme_admin": false, 00:08:32.860 "nvme_io": false, 00:08:32.860 "nvme_io_md": false, 00:08:32.860 "write_zeroes": true, 00:08:32.860 "zcopy": true, 00:08:32.860 "get_zone_info": false, 00:08:32.860 "zone_management": false, 00:08:32.860 "zone_append": false, 00:08:32.860 "compare": false, 00:08:32.860 "compare_and_write": false, 00:08:32.860 "abort": true, 00:08:32.860 "seek_hole": false, 00:08:32.860 "seek_data": false, 00:08:32.860 "copy": true, 00:08:32.860 "nvme_iov_md": false 00:08:32.860 }, 00:08:32.860 "memory_domains": [ 00:08:32.860 { 00:08:32.860 "dma_device_id": "system", 00:08:32.860 "dma_device_type": 1 00:08:32.860 }, 00:08:32.860 { 00:08:32.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.860 "dma_device_type": 2 00:08:32.860 } 00:08:32.860 ], 00:08:32.860 "driver_specific": {} 00:08:32.860 } 00:08:32.860 ] 00:08:32.860 17:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.860 17:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:32.860 17:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:32.860 17:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:32.860 17:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:32.860 17:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:32.860 17:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:32.860 17:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:32.860 17:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.860 17:25:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.860 17:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.860 17:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.860 17:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.860 17:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.860 17:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.860 17:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:32.860 17:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.860 17:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.860 "name": "Existed_Raid", 00:08:32.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.860 "strip_size_kb": 64, 00:08:32.860 "state": "configuring", 00:08:32.860 "raid_level": "raid0", 00:08:32.860 "superblock": false, 00:08:32.860 "num_base_bdevs": 3, 00:08:32.860 "num_base_bdevs_discovered": 2, 00:08:32.860 "num_base_bdevs_operational": 3, 00:08:32.860 "base_bdevs_list": [ 00:08:32.860 { 00:08:32.860 "name": "BaseBdev1", 00:08:32.860 "uuid": "ca9361ad-4dbc-4f66-a6a1-138554ffbc93", 00:08:32.860 "is_configured": true, 00:08:32.860 "data_offset": 0, 00:08:32.860 "data_size": 65536 00:08:32.860 }, 00:08:32.860 { 00:08:32.860 "name": null, 00:08:32.860 "uuid": "a2937640-dc8f-414d-8fcb-8482405f0ffe", 00:08:32.860 "is_configured": false, 00:08:32.860 "data_offset": 0, 00:08:32.860 "data_size": 65536 00:08:32.860 }, 00:08:32.860 { 00:08:32.860 "name": "BaseBdev3", 00:08:32.860 "uuid": "5362d8a1-6503-4699-8146-fdd16fae2d6c", 00:08:32.860 "is_configured": true, 00:08:32.860 "data_offset": 0, 
00:08:32.860 "data_size": 65536 00:08:32.860 } 00:08:32.860 ] 00:08:32.860 }' 00:08:32.860 17:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.860 17:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.120 17:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.120 17:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.120 17:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.120 17:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:33.120 17:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.380 17:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:33.380 17:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:33.380 17:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.380 17:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.380 [2024-12-07 17:25:06.507638] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:33.380 17:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.380 17:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:33.380 17:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:33.380 17:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:33.380 17:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:33.380 17:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.380 17:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:33.380 17:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.380 17:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.380 17:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.380 17:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.380 17:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.380 17:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.380 17:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.380 17:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:33.380 17:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.380 17:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.380 "name": "Existed_Raid", 00:08:33.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.380 "strip_size_kb": 64, 00:08:33.380 "state": "configuring", 00:08:33.380 "raid_level": "raid0", 00:08:33.380 "superblock": false, 00:08:33.380 "num_base_bdevs": 3, 00:08:33.380 "num_base_bdevs_discovered": 1, 00:08:33.380 "num_base_bdevs_operational": 3, 00:08:33.380 "base_bdevs_list": [ 00:08:33.380 { 00:08:33.380 "name": "BaseBdev1", 00:08:33.380 "uuid": "ca9361ad-4dbc-4f66-a6a1-138554ffbc93", 00:08:33.380 "is_configured": true, 00:08:33.380 "data_offset": 0, 00:08:33.380 "data_size": 65536 00:08:33.380 }, 00:08:33.380 { 
00:08:33.380 "name": null, 00:08:33.380 "uuid": "a2937640-dc8f-414d-8fcb-8482405f0ffe", 00:08:33.380 "is_configured": false, 00:08:33.380 "data_offset": 0, 00:08:33.380 "data_size": 65536 00:08:33.380 }, 00:08:33.380 { 00:08:33.380 "name": null, 00:08:33.380 "uuid": "5362d8a1-6503-4699-8146-fdd16fae2d6c", 00:08:33.380 "is_configured": false, 00:08:33.380 "data_offset": 0, 00:08:33.380 "data_size": 65536 00:08:33.380 } 00:08:33.380 ] 00:08:33.380 }' 00:08:33.380 17:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.380 17:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.640 17:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.640 17:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:33.640 17:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.640 17:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.640 17:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.640 17:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:33.640 17:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:33.640 17:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.640 17:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.900 [2024-12-07 17:25:07.022786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:33.900 17:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.900 17:25:07 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:33.900 17:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:33.900 17:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:33.900 17:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:33.900 17:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.900 17:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:33.900 17:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.900 17:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.900 17:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.900 17:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.900 17:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.900 17:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:33.900 17:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.900 17:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.900 17:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.900 17:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.900 "name": "Existed_Raid", 00:08:33.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.900 "strip_size_kb": 64, 00:08:33.900 "state": "configuring", 00:08:33.900 "raid_level": "raid0", 00:08:33.900 
"superblock": false, 00:08:33.900 "num_base_bdevs": 3, 00:08:33.900 "num_base_bdevs_discovered": 2, 00:08:33.900 "num_base_bdevs_operational": 3, 00:08:33.900 "base_bdevs_list": [ 00:08:33.900 { 00:08:33.900 "name": "BaseBdev1", 00:08:33.900 "uuid": "ca9361ad-4dbc-4f66-a6a1-138554ffbc93", 00:08:33.900 "is_configured": true, 00:08:33.900 "data_offset": 0, 00:08:33.900 "data_size": 65536 00:08:33.900 }, 00:08:33.900 { 00:08:33.900 "name": null, 00:08:33.900 "uuid": "a2937640-dc8f-414d-8fcb-8482405f0ffe", 00:08:33.900 "is_configured": false, 00:08:33.900 "data_offset": 0, 00:08:33.900 "data_size": 65536 00:08:33.900 }, 00:08:33.900 { 00:08:33.900 "name": "BaseBdev3", 00:08:33.900 "uuid": "5362d8a1-6503-4699-8146-fdd16fae2d6c", 00:08:33.900 "is_configured": true, 00:08:33.900 "data_offset": 0, 00:08:33.900 "data_size": 65536 00:08:33.900 } 00:08:33.900 ] 00:08:33.900 }' 00:08:33.900 17:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.900 17:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.159 17:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.159 17:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.159 17:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.159 17:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:34.159 17:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.419 17:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:34.419 17:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:34.419 17:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
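Each `verify_raid_bdev_state` call in the trace fetches `bdev_raid_get_bdevs all`, selects the `Existed_Raid` entry with `jq`, and compares fields such as `state` and `raid_level` against expectations. The real check is in `bdev_raid.sh` and uses `jq` against the live RPC output; the sketch below is a dependency-free approximation that extracts scalar fields with `sed` from a JSON blob copied from the log, purely to show the shape of the comparison.

```shell
# Hedged, self-contained approximation of the verify_raid_bdev_state field
# checks. The JSON is a trimmed copy of the Existed_Raid entry from the log;
# field() is a crude sed-based extractor standing in for jq.
raid_bdev_info='{
  "name": "Existed_Raid",
  "state": "configuring",
  "raid_level": "raid0",
  "strip_size_kb": 64,
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 3
}'

field() {
    # Pull a scalar value (quoted or bare) for key $1 out of the blob.
    printf '%s\n' "$raid_bdev_info" |
        sed -n "s/.*\"$1\": *\"\{0,1\}\([^\",}]*\)\"\{0,1\}.*/\1/p" |
        head -n1
}

state=$(field state)
level=$(field raid_level)
[ "$state" = "configuring" ] || { echo "unexpected state: $state"; exit 1; }
[ "$level" = "raid0" ] || { echo "unexpected level: $level"; exit 1; }
echo "raid_state_ok"
```

This mirrors what the trace verifies after each add/remove of a base bdev: the array stays in `configuring` state while `num_base_bdevs_discovered` moves between 1 and 2 of the 3 operational bdevs.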
00:08:34.419 17:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.419 [2024-12-07 17:25:07.545928] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:34.419 17:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.419 17:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:34.419 17:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.419 17:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:34.419 17:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:34.419 17:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:34.419 17:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:34.419 17:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.419 17:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.419 17:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.419 17:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.419 17:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.419 17:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.419 17:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.419 17:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.419 17:25:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.419 17:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.419 "name": "Existed_Raid", 00:08:34.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.419 "strip_size_kb": 64, 00:08:34.419 "state": "configuring", 00:08:34.419 "raid_level": "raid0", 00:08:34.419 "superblock": false, 00:08:34.419 "num_base_bdevs": 3, 00:08:34.419 "num_base_bdevs_discovered": 1, 00:08:34.419 "num_base_bdevs_operational": 3, 00:08:34.419 "base_bdevs_list": [ 00:08:34.419 { 00:08:34.419 "name": null, 00:08:34.419 "uuid": "ca9361ad-4dbc-4f66-a6a1-138554ffbc93", 00:08:34.419 "is_configured": false, 00:08:34.419 "data_offset": 0, 00:08:34.419 "data_size": 65536 00:08:34.419 }, 00:08:34.419 { 00:08:34.419 "name": null, 00:08:34.419 "uuid": "a2937640-dc8f-414d-8fcb-8482405f0ffe", 00:08:34.419 "is_configured": false, 00:08:34.419 "data_offset": 0, 00:08:34.419 "data_size": 65536 00:08:34.419 }, 00:08:34.419 { 00:08:34.419 "name": "BaseBdev3", 00:08:34.419 "uuid": "5362d8a1-6503-4699-8146-fdd16fae2d6c", 00:08:34.419 "is_configured": true, 00:08:34.419 "data_offset": 0, 00:08:34.419 "data_size": 65536 00:08:34.419 } 00:08:34.419 ] 00:08:34.419 }' 00:08:34.419 17:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.419 17:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.679 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.679 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:34.679 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.679 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.679 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:08:34.939 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:34.939 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:34.939 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.939 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.939 [2024-12-07 17:25:08.067344] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:34.939 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.939 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:34.939 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.939 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:34.939 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:34.939 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:34.939 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:34.939 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.939 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.939 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.939 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.939 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:08:34.939 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.939 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.939 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.939 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.939 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.939 "name": "Existed_Raid", 00:08:34.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.939 "strip_size_kb": 64, 00:08:34.939 "state": "configuring", 00:08:34.939 "raid_level": "raid0", 00:08:34.939 "superblock": false, 00:08:34.939 "num_base_bdevs": 3, 00:08:34.939 "num_base_bdevs_discovered": 2, 00:08:34.939 "num_base_bdevs_operational": 3, 00:08:34.939 "base_bdevs_list": [ 00:08:34.939 { 00:08:34.939 "name": null, 00:08:34.939 "uuid": "ca9361ad-4dbc-4f66-a6a1-138554ffbc93", 00:08:34.939 "is_configured": false, 00:08:34.939 "data_offset": 0, 00:08:34.939 "data_size": 65536 00:08:34.939 }, 00:08:34.939 { 00:08:34.939 "name": "BaseBdev2", 00:08:34.939 "uuid": "a2937640-dc8f-414d-8fcb-8482405f0ffe", 00:08:34.939 "is_configured": true, 00:08:34.939 "data_offset": 0, 00:08:34.939 "data_size": 65536 00:08:34.939 }, 00:08:34.939 { 00:08:34.939 "name": "BaseBdev3", 00:08:34.939 "uuid": "5362d8a1-6503-4699-8146-fdd16fae2d6c", 00:08:34.939 "is_configured": true, 00:08:34.939 "data_offset": 0, 00:08:34.939 "data_size": 65536 00:08:34.939 } 00:08:34.939 ] 00:08:34.939 }' 00:08:34.939 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.939 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.199 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.199 
17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.199 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.200 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:35.200 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.459 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:35.460 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:35.460 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.460 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.460 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.460 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.460 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ca9361ad-4dbc-4f66-a6a1-138554ffbc93 00:08:35.460 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.460 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.460 [2024-12-07 17:25:08.678315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:35.460 [2024-12-07 17:25:08.678375] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:35.460 [2024-12-07 17:25:08.678384] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:35.460 [2024-12-07 17:25:08.678647] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:08:35.460 [2024-12-07 17:25:08.678815] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:35.460 [2024-12-07 17:25:08.678828] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:35.460 [2024-12-07 17:25:08.679107] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:35.460 NewBaseBdev 00:08:35.460 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.460 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:35.460 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:35.460 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:35.460 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:35.460 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:35.460 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:35.460 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:35.460 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.460 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.460 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.460 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:35.460 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.460 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:35.460 [ 00:08:35.460 { 00:08:35.460 "name": "NewBaseBdev", 00:08:35.460 "aliases": [ 00:08:35.460 "ca9361ad-4dbc-4f66-a6a1-138554ffbc93" 00:08:35.460 ], 00:08:35.460 "product_name": "Malloc disk", 00:08:35.460 "block_size": 512, 00:08:35.460 "num_blocks": 65536, 00:08:35.460 "uuid": "ca9361ad-4dbc-4f66-a6a1-138554ffbc93", 00:08:35.460 "assigned_rate_limits": { 00:08:35.460 "rw_ios_per_sec": 0, 00:08:35.460 "rw_mbytes_per_sec": 0, 00:08:35.460 "r_mbytes_per_sec": 0, 00:08:35.460 "w_mbytes_per_sec": 0 00:08:35.460 }, 00:08:35.460 "claimed": true, 00:08:35.460 "claim_type": "exclusive_write", 00:08:35.460 "zoned": false, 00:08:35.460 "supported_io_types": { 00:08:35.460 "read": true, 00:08:35.460 "write": true, 00:08:35.460 "unmap": true, 00:08:35.460 "flush": true, 00:08:35.460 "reset": true, 00:08:35.460 "nvme_admin": false, 00:08:35.460 "nvme_io": false, 00:08:35.460 "nvme_io_md": false, 00:08:35.460 "write_zeroes": true, 00:08:35.460 "zcopy": true, 00:08:35.460 "get_zone_info": false, 00:08:35.460 "zone_management": false, 00:08:35.460 "zone_append": false, 00:08:35.460 "compare": false, 00:08:35.460 "compare_and_write": false, 00:08:35.460 "abort": true, 00:08:35.460 "seek_hole": false, 00:08:35.460 "seek_data": false, 00:08:35.460 "copy": true, 00:08:35.460 "nvme_iov_md": false 00:08:35.460 }, 00:08:35.460 "memory_domains": [ 00:08:35.460 { 00:08:35.460 "dma_device_id": "system", 00:08:35.460 "dma_device_type": 1 00:08:35.460 }, 00:08:35.460 { 00:08:35.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.460 "dma_device_type": 2 00:08:35.460 } 00:08:35.460 ], 00:08:35.460 "driver_specific": {} 00:08:35.460 } 00:08:35.460 ] 00:08:35.460 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.460 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:35.460 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:08:35.460 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.460 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:35.460 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:35.460 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:35.460 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:35.460 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.460 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.460 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.460 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.460 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.460 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.460 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.460 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.460 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.460 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.460 "name": "Existed_Raid", 00:08:35.460 "uuid": "8ebcc872-f022-49c8-90f6-61ba20b46e6e", 00:08:35.460 "strip_size_kb": 64, 00:08:35.460 "state": "online", 00:08:35.460 "raid_level": "raid0", 00:08:35.460 "superblock": false, 00:08:35.460 "num_base_bdevs": 3, 00:08:35.460 
"num_base_bdevs_discovered": 3, 00:08:35.460 "num_base_bdevs_operational": 3, 00:08:35.460 "base_bdevs_list": [ 00:08:35.460 { 00:08:35.460 "name": "NewBaseBdev", 00:08:35.460 "uuid": "ca9361ad-4dbc-4f66-a6a1-138554ffbc93", 00:08:35.460 "is_configured": true, 00:08:35.460 "data_offset": 0, 00:08:35.460 "data_size": 65536 00:08:35.460 }, 00:08:35.460 { 00:08:35.460 "name": "BaseBdev2", 00:08:35.460 "uuid": "a2937640-dc8f-414d-8fcb-8482405f0ffe", 00:08:35.460 "is_configured": true, 00:08:35.460 "data_offset": 0, 00:08:35.460 "data_size": 65536 00:08:35.460 }, 00:08:35.460 { 00:08:35.460 "name": "BaseBdev3", 00:08:35.460 "uuid": "5362d8a1-6503-4699-8146-fdd16fae2d6c", 00:08:35.460 "is_configured": true, 00:08:35.460 "data_offset": 0, 00:08:35.460 "data_size": 65536 00:08:35.460 } 00:08:35.460 ] 00:08:35.460 }' 00:08:35.460 17:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.460 17:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.043 17:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:36.043 17:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:36.043 17:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:36.043 17:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:36.043 17:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:36.043 17:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:36.043 17:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:36.043 17:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.043 17:25:09 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:08:36.043 17:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:36.043 [2024-12-07 17:25:09.165858] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:36.043 17:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.043 17:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:36.043 "name": "Existed_Raid", 00:08:36.043 "aliases": [ 00:08:36.043 "8ebcc872-f022-49c8-90f6-61ba20b46e6e" 00:08:36.043 ], 00:08:36.043 "product_name": "Raid Volume", 00:08:36.043 "block_size": 512, 00:08:36.043 "num_blocks": 196608, 00:08:36.043 "uuid": "8ebcc872-f022-49c8-90f6-61ba20b46e6e", 00:08:36.043 "assigned_rate_limits": { 00:08:36.043 "rw_ios_per_sec": 0, 00:08:36.043 "rw_mbytes_per_sec": 0, 00:08:36.043 "r_mbytes_per_sec": 0, 00:08:36.043 "w_mbytes_per_sec": 0 00:08:36.043 }, 00:08:36.043 "claimed": false, 00:08:36.043 "zoned": false, 00:08:36.043 "supported_io_types": { 00:08:36.043 "read": true, 00:08:36.043 "write": true, 00:08:36.043 "unmap": true, 00:08:36.043 "flush": true, 00:08:36.043 "reset": true, 00:08:36.043 "nvme_admin": false, 00:08:36.043 "nvme_io": false, 00:08:36.043 "nvme_io_md": false, 00:08:36.043 "write_zeroes": true, 00:08:36.043 "zcopy": false, 00:08:36.043 "get_zone_info": false, 00:08:36.043 "zone_management": false, 00:08:36.043 "zone_append": false, 00:08:36.043 "compare": false, 00:08:36.043 "compare_and_write": false, 00:08:36.043 "abort": false, 00:08:36.043 "seek_hole": false, 00:08:36.043 "seek_data": false, 00:08:36.043 "copy": false, 00:08:36.043 "nvme_iov_md": false 00:08:36.043 }, 00:08:36.043 "memory_domains": [ 00:08:36.043 { 00:08:36.043 "dma_device_id": "system", 00:08:36.043 "dma_device_type": 1 00:08:36.043 }, 00:08:36.043 { 00:08:36.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.043 "dma_device_type": 2 00:08:36.043 }, 00:08:36.043 
{ 00:08:36.043 "dma_device_id": "system", 00:08:36.043 "dma_device_type": 1 00:08:36.043 }, 00:08:36.043 { 00:08:36.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.043 "dma_device_type": 2 00:08:36.043 }, 00:08:36.044 { 00:08:36.044 "dma_device_id": "system", 00:08:36.044 "dma_device_type": 1 00:08:36.044 }, 00:08:36.044 { 00:08:36.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.044 "dma_device_type": 2 00:08:36.044 } 00:08:36.044 ], 00:08:36.044 "driver_specific": { 00:08:36.044 "raid": { 00:08:36.044 "uuid": "8ebcc872-f022-49c8-90f6-61ba20b46e6e", 00:08:36.044 "strip_size_kb": 64, 00:08:36.044 "state": "online", 00:08:36.044 "raid_level": "raid0", 00:08:36.044 "superblock": false, 00:08:36.044 "num_base_bdevs": 3, 00:08:36.044 "num_base_bdevs_discovered": 3, 00:08:36.044 "num_base_bdevs_operational": 3, 00:08:36.044 "base_bdevs_list": [ 00:08:36.044 { 00:08:36.044 "name": "NewBaseBdev", 00:08:36.044 "uuid": "ca9361ad-4dbc-4f66-a6a1-138554ffbc93", 00:08:36.044 "is_configured": true, 00:08:36.044 "data_offset": 0, 00:08:36.044 "data_size": 65536 00:08:36.044 }, 00:08:36.044 { 00:08:36.044 "name": "BaseBdev2", 00:08:36.044 "uuid": "a2937640-dc8f-414d-8fcb-8482405f0ffe", 00:08:36.044 "is_configured": true, 00:08:36.044 "data_offset": 0, 00:08:36.044 "data_size": 65536 00:08:36.044 }, 00:08:36.044 { 00:08:36.044 "name": "BaseBdev3", 00:08:36.044 "uuid": "5362d8a1-6503-4699-8146-fdd16fae2d6c", 00:08:36.044 "is_configured": true, 00:08:36.044 "data_offset": 0, 00:08:36.044 "data_size": 65536 00:08:36.044 } 00:08:36.044 ] 00:08:36.044 } 00:08:36.044 } 00:08:36.044 }' 00:08:36.044 17:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:36.044 17:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:36.044 BaseBdev2 00:08:36.044 BaseBdev3' 00:08:36.044 17:25:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:36.044 17:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:36.044 17:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:36.044 17:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:36.044 17:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:36.044 17:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.044 17:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.044 17:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.044 17:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:36.044 17:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:36.044 17:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:36.044 17:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:36.044 17:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:36.044 17:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.044 17:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.044 17:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.044 17:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:36.044 
17:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:36.044 17:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:36.044 17:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:36.044 17:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:36.044 17:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.044 17:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.044 17:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.044 17:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:36.044 17:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:36.044 17:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:36.044 17:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.044 17:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.044 [2024-12-07 17:25:09.413108] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:36.044 [2024-12-07 17:25:09.413139] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:36.044 [2024-12-07 17:25:09.413222] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:36.044 [2024-12-07 17:25:09.413291] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:36.044 [2024-12-07 17:25:09.413303] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:36.044 17:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.044 17:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63845 00:08:36.044 17:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 63845 ']' 00:08:36.044 17:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 63845 00:08:36.044 17:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:36.304 17:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:36.304 17:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63845 00:08:36.304 17:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:36.304 17:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:36.304 killing process with pid 63845 00:08:36.304 17:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63845' 00:08:36.304 17:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 63845 00:08:36.304 [2024-12-07 17:25:09.464456] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:36.304 17:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 63845 00:08:36.564 [2024-12-07 17:25:09.770894] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:37.945 17:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:37.945 00:08:37.945 real 0m10.633s 00:08:37.945 user 0m17.008s 00:08:37.945 sys 0m1.766s 00:08:37.945 17:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:37.945 
17:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.945 ************************************ 00:08:37.945 END TEST raid_state_function_test 00:08:37.945 ************************************ 00:08:37.945 17:25:10 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:08:37.945 17:25:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:37.945 17:25:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.945 17:25:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:37.945 ************************************ 00:08:37.945 START TEST raid_state_function_test_sb 00:08:37.945 ************************************ 00:08:37.945 17:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:08:37.945 17:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:37.945 17:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:37.945 17:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:37.945 17:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:37.945 17:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:37.945 17:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:37.945 17:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:37.945 17:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:37.945 17:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:37.945 17:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 
00:08:37.945 17:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:37.945 17:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:37.945 17:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:37.945 17:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:37.945 17:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:37.945 17:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:37.945 17:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:37.945 17:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:37.945 17:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:37.945 17:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:37.945 17:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:37.945 17:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:37.945 17:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:37.945 17:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:37.945 17:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:37.945 17:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:37.945 17:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:37.945 17:25:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64472 00:08:37.945 17:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64472' 00:08:37.945 Process raid pid: 64472 00:08:37.945 17:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64472 00:08:37.945 17:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64472 ']' 00:08:37.945 17:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.945 17:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:37.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.945 17:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.945 17:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:37.945 17:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.945 [2024-12-07 17:25:11.050507] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:08:37.945 [2024-12-07 17:25:11.050681] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:37.945 [2024-12-07 17:25:11.237441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:38.205 [2024-12-07 17:25:11.354167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:38.205 [2024-12-07 17:25:11.554107] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:38.205 [2024-12-07 17:25:11.554153] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:38.776 17:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:38.776 17:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0
00:08:38.776 17:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:08:38.776 17:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:38.776 17:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:38.776 [2024-12-07 17:25:11.910551] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:38.776 [2024-12-07 17:25:11.910601] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:38.776 [2024-12-07 17:25:11.910612] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:38.776 [2024-12-07 17:25:11.910621] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:38.776 [2024-12-07 17:25:11.910627] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:08:38.776 [2024-12-07 17:25:11.910635] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:08:38.776 17:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:38.776 17:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:38.776 17:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:38.776 17:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:38.777 17:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:38.777 17:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:38.777 17:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:38.777 17:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:38.777 17:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:38.777 17:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:38.777 17:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:38.777 17:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:38.777 17:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:38.777 17:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:38.777 17:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:38.777 17:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:38.777 17:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:38.777 "name": "Existed_Raid",
00:08:38.777 "uuid": "e898c1c1-39ac-4ca3-815c-3637059642bf",
00:08:38.777 "strip_size_kb": 64,
00:08:38.777 "state": "configuring",
00:08:38.777 "raid_level": "raid0",
00:08:38.777 "superblock": true,
00:08:38.777 "num_base_bdevs": 3,
00:08:38.777 "num_base_bdevs_discovered": 0,
00:08:38.777 "num_base_bdevs_operational": 3,
00:08:38.777 "base_bdevs_list": [
00:08:38.777 {
00:08:38.777 "name": "BaseBdev1",
00:08:38.777 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:38.777 "is_configured": false,
00:08:38.777 "data_offset": 0,
00:08:38.777 "data_size": 0
00:08:38.777 },
00:08:38.777 {
00:08:38.777 "name": "BaseBdev2",
00:08:38.777 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:38.777 "is_configured": false,
00:08:38.777 "data_offset": 0,
00:08:38.777 "data_size": 0
00:08:38.777 },
00:08:38.777 {
00:08:38.777 "name": "BaseBdev3",
00:08:38.777 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:38.777 "is_configured": false,
00:08:38.777 "data_offset": 0,
00:08:38.777 "data_size": 0
00:08:38.777 }
00:08:38.777 ]
00:08:38.777 }'
00:08:38.777 17:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:38.777 17:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:39.037 17:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:39.037 17:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:39.037 17:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:39.037 [2024-12-07 17:25:12.389672] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:39.037 [2024-12-07 17:25:12.389713] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:08:39.037 17:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:39.037 17:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:08:39.037 17:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:39.037 17:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:39.037 [2024-12-07 17:25:12.401654] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:39.037 [2024-12-07 17:25:12.401696] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:39.037 [2024-12-07 17:25:12.401705] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:39.037 [2024-12-07 17:25:12.401714] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:39.037 [2024-12-07 17:25:12.401736] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:08:39.037 [2024-12-07 17:25:12.401745] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:08:39.037 17:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:39.037 17:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:08:39.037 17:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:39.037 17:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:39.297 [2024-12-07 17:25:12.448426] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:39.297 BaseBdev1
00:08:39.297 17:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:39.297 17:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:08:39.297 17:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:08:39.297 17:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:39.297 17:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:08:39.297 17:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:39.297 17:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:39.297 17:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:39.297 17:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:39.297 17:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:39.297 17:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:39.297 17:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:08:39.297 17:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:39.297 17:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:39.297 [
00:08:39.297 {
00:08:39.297 "name": "BaseBdev1",
00:08:39.297 "aliases": [
00:08:39.297 "28fccb74-ae9b-43ad-8e98-6207272b3f21"
00:08:39.297 ],
00:08:39.297 "product_name": "Malloc disk",
00:08:39.297 "block_size": 512,
00:08:39.297 "num_blocks": 65536,
00:08:39.297 "uuid": "28fccb74-ae9b-43ad-8e98-6207272b3f21",
00:08:39.297 "assigned_rate_limits": {
00:08:39.297 "rw_ios_per_sec": 0,
00:08:39.297 "rw_mbytes_per_sec": 0,
00:08:39.297 "r_mbytes_per_sec": 0,
00:08:39.297 "w_mbytes_per_sec": 0
00:08:39.297 },
00:08:39.297 "claimed": true,
00:08:39.297 "claim_type": "exclusive_write",
00:08:39.297 "zoned": false,
00:08:39.297 "supported_io_types": {
00:08:39.297 "read": true,
00:08:39.297 "write": true,
00:08:39.297 "unmap": true,
00:08:39.297 "flush": true,
00:08:39.297 "reset": true,
00:08:39.297 "nvme_admin": false,
00:08:39.297 "nvme_io": false,
00:08:39.297 "nvme_io_md": false,
00:08:39.297 "write_zeroes": true,
00:08:39.297 "zcopy": true,
00:08:39.297 "get_zone_info": false,
00:08:39.297 "zone_management": false,
00:08:39.297 "zone_append": false,
00:08:39.297 "compare": false,
00:08:39.297 "compare_and_write": false,
00:08:39.297 "abort": true,
00:08:39.297 "seek_hole": false,
00:08:39.297 "seek_data": false,
00:08:39.297 "copy": true,
00:08:39.297 "nvme_iov_md": false
00:08:39.297 },
00:08:39.297 "memory_domains": [
00:08:39.297 {
00:08:39.297 "dma_device_id": "system",
00:08:39.297 "dma_device_type": 1
00:08:39.297 },
00:08:39.297 {
00:08:39.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:39.297 "dma_device_type": 2
00:08:39.297 }
00:08:39.297 ],
00:08:39.297 "driver_specific": {}
00:08:39.297 }
00:08:39.297 ]
00:08:39.297 17:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:39.297 17:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:08:39.297 17:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:39.297 17:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:39.297 17:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:39.297 17:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:39.297 17:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:39.297 17:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:39.297 17:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:39.297 17:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:39.297 17:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:39.297 17:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:39.297 17:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:39.297 17:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:39.297 17:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:39.297 17:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:39.297 17:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:39.297 17:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:39.297 "name": "Existed_Raid",
00:08:39.297 "uuid": "02e9547a-251a-4e2b-830e-837eeed5beb8",
00:08:39.297 "strip_size_kb": 64,
00:08:39.297 "state": "configuring",
00:08:39.297 "raid_level": "raid0",
00:08:39.297 "superblock": true,
00:08:39.297 "num_base_bdevs": 3,
00:08:39.297 "num_base_bdevs_discovered": 1,
00:08:39.297 "num_base_bdevs_operational": 3,
00:08:39.297 "base_bdevs_list": [
00:08:39.297 {
00:08:39.298 "name": "BaseBdev1",
00:08:39.298 "uuid": "28fccb74-ae9b-43ad-8e98-6207272b3f21",
00:08:39.298 "is_configured": true,
00:08:39.298 "data_offset": 2048,
00:08:39.298 "data_size": 63488
00:08:39.298 },
00:08:39.298 {
00:08:39.298 "name": "BaseBdev2",
00:08:39.298 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:39.298 "is_configured": false,
00:08:39.298 "data_offset": 0,
00:08:39.298 "data_size": 0
00:08:39.298 },
00:08:39.298 {
00:08:39.298 "name": "BaseBdev3",
00:08:39.298 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:39.298 "is_configured": false,
00:08:39.298 "data_offset": 0,
00:08:39.298 "data_size": 0
00:08:39.298 }
00:08:39.298 ]
00:08:39.298 }'
00:08:39.298 17:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:39.298 17:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:39.868 17:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:39.868 17:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:39.868 17:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:39.868 [2024-12-07 17:25:12.943665] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:39.868 [2024-12-07 17:25:12.943725] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:08:39.868 17:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:39.868 17:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:08:39.868 17:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:39.868 17:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:39.868 [2024-12-07 17:25:12.955690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:39.868 [2024-12-07 17:25:12.957501] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:39.868 [2024-12-07 17:25:12.957540] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:39.868 [2024-12-07 17:25:12.957550] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:08:39.868 [2024-12-07 17:25:12.957559] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:08:39.868 17:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:39.868 17:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:08:39.868 17:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:39.868 17:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:39.868 17:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:39.868 17:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:39.868 17:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:39.868 17:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:39.868 17:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:39.868 17:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:39.868 17:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:39.868 17:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:39.868 17:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:39.868 17:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:39.868 17:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:39.868 17:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:39.868 17:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:39.868 17:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:39.868 17:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:39.868 "name": "Existed_Raid",
00:08:39.868 "uuid": "57f60427-dc45-4bb8-93c6-a805a24ada18",
00:08:39.868 "strip_size_kb": 64,
00:08:39.868 "state": "configuring",
00:08:39.868 "raid_level": "raid0",
00:08:39.868 "superblock": true,
00:08:39.868 "num_base_bdevs": 3,
00:08:39.868 "num_base_bdevs_discovered": 1,
00:08:39.868 "num_base_bdevs_operational": 3,
00:08:39.868 "base_bdevs_list": [
00:08:39.868 {
00:08:39.868 "name": "BaseBdev1",
00:08:39.868 "uuid": "28fccb74-ae9b-43ad-8e98-6207272b3f21",
00:08:39.868 "is_configured": true,
00:08:39.868 "data_offset": 2048,
00:08:39.868 "data_size": 63488
00:08:39.868 },
00:08:39.868 {
00:08:39.868 "name": "BaseBdev2",
00:08:39.868 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:39.868 "is_configured": false,
00:08:39.868 "data_offset": 0,
00:08:39.868 "data_size": 0
00:08:39.868 },
00:08:39.868 {
00:08:39.868 "name": "BaseBdev3",
00:08:39.868 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:39.868 "is_configured": false,
00:08:39.868 "data_offset": 0,
00:08:39.868 "data_size": 0
00:08:39.868 }
00:08:39.868 ]
00:08:39.868 }'
00:08:39.868 17:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:39.868 17:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:40.128 17:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:08:40.128 17:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:40.128 17:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:40.128 [2024-12-07 17:25:13.480333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:40.128 BaseBdev2
00:08:40.128 17:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:40.128 17:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:08:40.128 17:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:08:40.128 17:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:40.128 17:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:08:40.128 17:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:40.128 17:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:40.128 17:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:40.128 17:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:40.128 17:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:40.128 17:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:40.128 17:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:08:40.128 17:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:40.129 17:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:40.129 [
00:08:40.129 {
00:08:40.129 "name": "BaseBdev2",
00:08:40.129 "aliases": [
00:08:40.129 "75d8effb-fbf9-4443-b1cb-4a8552c906e4"
00:08:40.129 ],
00:08:40.129 "product_name": "Malloc disk",
00:08:40.129 "block_size": 512,
00:08:40.129 "num_blocks": 65536,
00:08:40.129 "uuid": "75d8effb-fbf9-4443-b1cb-4a8552c906e4",
00:08:40.129 "assigned_rate_limits": {
00:08:40.129 "rw_ios_per_sec": 0,
00:08:40.129 "rw_mbytes_per_sec": 0,
00:08:40.129 "r_mbytes_per_sec": 0,
00:08:40.129 "w_mbytes_per_sec": 0
00:08:40.129 },
00:08:40.129 "claimed": true,
00:08:40.129 "claim_type": "exclusive_write",
00:08:40.389 "zoned": false,
00:08:40.389 "supported_io_types": {
00:08:40.389 "read": true,
00:08:40.389 "write": true,
00:08:40.389 "unmap": true,
00:08:40.389 "flush": true,
00:08:40.389 "reset": true,
00:08:40.389 "nvme_admin": false,
00:08:40.389 "nvme_io": false,
00:08:40.389 "nvme_io_md": false,
00:08:40.389 "write_zeroes": true,
00:08:40.389 "zcopy": true,
00:08:40.389 "get_zone_info": false,
00:08:40.389 "zone_management": false,
00:08:40.389 "zone_append": false,
00:08:40.389 "compare": false,
00:08:40.389 "compare_and_write": false,
00:08:40.389 "abort": true,
00:08:40.389 "seek_hole": false,
00:08:40.389 "seek_data": false,
00:08:40.389 "copy": true,
00:08:40.389 "nvme_iov_md": false
00:08:40.389 },
00:08:40.389 "memory_domains": [
00:08:40.389 {
00:08:40.389 "dma_device_id": "system",
00:08:40.389 "dma_device_type": 1
00:08:40.389 },
00:08:40.389 {
00:08:40.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:40.389 "dma_device_type": 2
00:08:40.389 }
00:08:40.389 ],
00:08:40.389 "driver_specific": {}
00:08:40.389 }
00:08:40.389 ]
00:08:40.389 17:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:40.389 17:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:08:40.389 17:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:08:40.389 17:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:40.389 17:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:40.389 17:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:40.389 17:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:40.389 17:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:40.389 17:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:40.389 17:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:40.389 17:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:40.389 17:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:40.389 17:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:40.389 17:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:40.389 17:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:40.389 17:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:40.389 17:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:40.389 17:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:40.389 17:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:40.389 17:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:40.389 "name": "Existed_Raid",
00:08:40.389 "uuid": "57f60427-dc45-4bb8-93c6-a805a24ada18",
00:08:40.389 "strip_size_kb": 64,
00:08:40.389 "state": "configuring",
00:08:40.389 "raid_level": "raid0",
00:08:40.389 "superblock": true,
00:08:40.389 "num_base_bdevs": 3,
00:08:40.389 "num_base_bdevs_discovered": 2,
00:08:40.389 "num_base_bdevs_operational": 3,
00:08:40.389 "base_bdevs_list": [
00:08:40.389 {
00:08:40.389 "name": "BaseBdev1",
00:08:40.389 "uuid": "28fccb74-ae9b-43ad-8e98-6207272b3f21",
00:08:40.389 "is_configured": true,
00:08:40.389 "data_offset": 2048,
00:08:40.389 "data_size": 63488
00:08:40.389 },
00:08:40.389 {
00:08:40.389 "name": "BaseBdev2",
00:08:40.389 "uuid": "75d8effb-fbf9-4443-b1cb-4a8552c906e4",
00:08:40.389 "is_configured": true,
00:08:40.389 "data_offset": 2048,
00:08:40.389 "data_size": 63488
00:08:40.389 },
00:08:40.389 {
00:08:40.389 "name": "BaseBdev3",
00:08:40.389 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:40.389 "is_configured": false,
00:08:40.389 "data_offset": 0,
00:08:40.389 "data_size": 0
00:08:40.389 }
00:08:40.389 ]
00:08:40.389 }'
00:08:40.389 17:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:40.389 17:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:40.648 17:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:08:40.648 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:40.648 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:40.907 BaseBdev3
00:08:40.907 [2024-12-07 17:25:14.054034] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:08:40.907 [2024-12-07 17:25:14.054293] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:08:40.907 [2024-12-07 17:25:14.054314] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:08:40.907 [2024-12-07 17:25:14.054582] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:08:40.907 [2024-12-07 17:25:14.054746] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:08:40.907 [2024-12-07 17:25:14.054757] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:08:40.907 [2024-12-07 17:25:14.054896] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:40.907 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:40.907 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:08:40.907 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:08:40.907 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:40.907 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:08:40.907 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:40.907 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:40.907 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:40.907 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:40.907 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:40.907 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:40.907 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:08:40.907 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:40.907 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:40.907 [
00:08:40.907 {
00:08:40.907 "name": "BaseBdev3",
00:08:40.907 "aliases": [
00:08:40.907 "ccfaaa2f-a5bc-45af-a8a0-a0b28166e94e"
00:08:40.907 ],
00:08:40.907 "product_name": "Malloc disk",
00:08:40.907 "block_size": 512,
00:08:40.907 "num_blocks": 65536,
00:08:40.907 "uuid": "ccfaaa2f-a5bc-45af-a8a0-a0b28166e94e",
00:08:40.907 "assigned_rate_limits": {
00:08:40.907 "rw_ios_per_sec": 0,
00:08:40.907 "rw_mbytes_per_sec": 0,
00:08:40.907 "r_mbytes_per_sec": 0,
00:08:40.907 "w_mbytes_per_sec": 0
00:08:40.907 },
00:08:40.907 "claimed": true,
00:08:40.907 "claim_type": "exclusive_write",
00:08:40.907 "zoned": false,
00:08:40.907 "supported_io_types": {
00:08:40.907 "read": true,
00:08:40.907 "write": true,
00:08:40.907 "unmap": true,
00:08:40.907 "flush": true,
00:08:40.907 "reset": true,
00:08:40.907 "nvme_admin": false,
00:08:40.907 "nvme_io": false,
00:08:40.907 "nvme_io_md": false,
00:08:40.907 "write_zeroes": true,
00:08:40.907 "zcopy": true,
00:08:40.907 "get_zone_info": false,
00:08:40.907 "zone_management": false,
00:08:40.907 "zone_append": false,
00:08:40.907 "compare": false,
00:08:40.907 "compare_and_write": false,
00:08:40.907 "abort": true,
00:08:40.907 "seek_hole": false,
00:08:40.907 "seek_data": false,
00:08:40.907 "copy": true,
00:08:40.907 "nvme_iov_md": false
00:08:40.907 },
00:08:40.907 "memory_domains": [
00:08:40.907 {
00:08:40.907 "dma_device_id": "system",
00:08:40.907 "dma_device_type": 1
00:08:40.907 },
00:08:40.907 {
00:08:40.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:40.907 "dma_device_type": 2
00:08:40.907 }
00:08:40.907 ],
00:08:40.907 "driver_specific": {}
00:08:40.907 }
00:08:40.907 ]
00:08:40.907 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:40.907 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:08:40.907 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:08:40.908 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:40.908 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3
00:08:40.908 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:40.908 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:40.908 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:40.908 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:40.908 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:40.908 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:40.908 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:40.908 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:40.908 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:40.908 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:40.908 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:40.908 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:40.908 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:40.908 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:40.908 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:40.908 "name": "Existed_Raid",
00:08:40.908 "uuid": "57f60427-dc45-4bb8-93c6-a805a24ada18",
00:08:40.908 "strip_size_kb": 64,
00:08:40.908 "state": "online",
00:08:40.908 "raid_level": "raid0",
00:08:40.908 "superblock": true,
00:08:40.908 "num_base_bdevs": 3,
00:08:40.908 "num_base_bdevs_discovered": 3,
00:08:40.908 "num_base_bdevs_operational": 3,
00:08:40.908 "base_bdevs_list": [
00:08:40.908 {
00:08:40.908 "name": "BaseBdev1",
00:08:40.908 "uuid": "28fccb74-ae9b-43ad-8e98-6207272b3f21",
00:08:40.908 "is_configured": true,
00:08:40.908 "data_offset": 2048,
00:08:40.908 "data_size": 63488
00:08:40.908 },
00:08:40.908 {
00:08:40.908 "name": "BaseBdev2",
00:08:40.908 "uuid": "75d8effb-fbf9-4443-b1cb-4a8552c906e4",
00:08:40.908 "is_configured": true,
00:08:40.908 "data_offset": 2048,
00:08:40.908 "data_size": 63488
00:08:40.908 },
00:08:40.908 {
00:08:40.908 "name": "BaseBdev3",
00:08:40.908 "uuid": "ccfaaa2f-a5bc-45af-a8a0-a0b28166e94e",
00:08:40.908 "is_configured": true,
00:08:40.908 "data_offset": 2048,
00:08:40.908 "data_size": 63488
00:08:40.908 }
00:08:40.908 ]
00:08:40.908 }'
00:08:40.908 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:40.908 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:41.167 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:08:41.167 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:08:41.167 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:08:41.167 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:08:41.167 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:08:41.167 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:08:41.167 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:08:41.167 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:08:41.167 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:41.167 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:41.167 [2024-12-07 17:25:14.521555] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:41.167 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:41.167 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:08:41.167 "name": "Existed_Raid",
00:08:41.167 "aliases": [
00:08:41.167 "57f60427-dc45-4bb8-93c6-a805a24ada18"
00:08:41.167 ],
00:08:41.167 "product_name": "Raid Volume",
00:08:41.167 "block_size": 512,
00:08:41.167 "num_blocks": 190464,
00:08:41.167 "uuid": "57f60427-dc45-4bb8-93c6-a805a24ada18",
00:08:41.167 "assigned_rate_limits": {
00:08:41.167 "rw_ios_per_sec": 0,
00:08:41.167 "rw_mbytes_per_sec": 0,
00:08:41.167 "r_mbytes_per_sec": 0,
00:08:41.167 "w_mbytes_per_sec": 0
00:08:41.167 },
00:08:41.167 "claimed": false,
00:08:41.167 "zoned": false,
00:08:41.167 "supported_io_types": {
00:08:41.167 "read": true,
00:08:41.167 "write": true,
00:08:41.167 "unmap": true,
00:08:41.167 "flush": true,
00:08:41.167 "reset": true,
00:08:41.167 "nvme_admin": false,
00:08:41.167 "nvme_io": false,
00:08:41.167 "nvme_io_md": false,
00:08:41.167 "write_zeroes": true,
00:08:41.167 "zcopy": false,
00:08:41.167 "get_zone_info": false,
00:08:41.167 "zone_management": false,
00:08:41.167 "zone_append": false,
00:08:41.167 "compare": false,
00:08:41.167 "compare_and_write": false,
00:08:41.167 "abort": false,
00:08:41.167 "seek_hole": false,
00:08:41.167 "seek_data": false,
00:08:41.167 "copy": false,
00:08:41.167 "nvme_iov_md": false
00:08:41.167 },
00:08:41.167 "memory_domains": [
00:08:41.167 {
00:08:41.167 "dma_device_id": "system",
00:08:41.168 "dma_device_type": 1
00:08:41.168 },
00:08:41.168 {
00:08:41.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:41.168 "dma_device_type": 2
00:08:41.168 },
00:08:41.168 {
00:08:41.168 "dma_device_id": "system",
00:08:41.168 "dma_device_type": 1
00:08:41.168 },
00:08:41.168 {
00:08:41.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:41.168 "dma_device_type": 2
00:08:41.168 },
00:08:41.168 {
00:08:41.168 "dma_device_id": "system",
00:08:41.168 "dma_device_type": 1
00:08:41.168 },
00:08:41.168 {
00:08:41.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:41.168 "dma_device_type": 2
00:08:41.168 }
00:08:41.168 ],
00:08:41.168 "driver_specific": {
00:08:41.168 "raid": {
00:08:41.168 "uuid": "57f60427-dc45-4bb8-93c6-a805a24ada18",
00:08:41.168 "strip_size_kb": 64,
00:08:41.168 "state": "online",
00:08:41.168 "raid_level": "raid0",
00:08:41.168 "superblock": true,
00:08:41.168 "num_base_bdevs": 3,
00:08:41.168 "num_base_bdevs_discovered": 3,
00:08:41.168 "num_base_bdevs_operational": 3,
00:08:41.168 "base_bdevs_list": [
00:08:41.168 {
00:08:41.168 "name": "BaseBdev1",
00:08:41.168 "uuid": "28fccb74-ae9b-43ad-8e98-6207272b3f21",
00:08:41.168 "is_configured": true,
00:08:41.168 "data_offset": 2048,
00:08:41.168 "data_size": 63488
00:08:41.168 },
00:08:41.168 {
00:08:41.168 "name": "BaseBdev2",
00:08:41.168 "uuid": "75d8effb-fbf9-4443-b1cb-4a8552c906e4",
00:08:41.168 "is_configured": true,
00:08:41.168 "data_offset": 2048,
00:08:41.168 "data_size": 63488
00:08:41.168 },
00:08:41.168 {
00:08:41.168 "name": "BaseBdev3",
00:08:41.168 "uuid": "ccfaaa2f-a5bc-45af-a8a0-a0b28166e94e",
00:08:41.168 "is_configured": true,
00:08:41.168 "data_offset": 2048,
00:08:41.168 "data_size": 63488
00:08:41.168 }
00:08:41.168 ]
00:08:41.168 }
00:08:41.168 }
00:08:41.168 }'
00:08:41.168 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:08:41.427 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:08:41.427 BaseBdev2
00:08:41.427 BaseBdev3'
00:08:41.427 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:41.427 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:08:41.427 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:41.427 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:08:41.427 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:41.427 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:41.427 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:41.427 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:41.427 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:41.427 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:41.427 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.427 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:41.427 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.427 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.427 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.427 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:41.428 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:41.428 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:41.428 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:41.428 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.428 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.428 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.428 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.428 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:41.428 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:41.428 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:41.428 17:25:14 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.428 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.428 [2024-12-07 17:25:14.792839] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:41.428 [2024-12-07 17:25:14.792870] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:41.428 [2024-12-07 17:25:14.792925] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:41.688 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.688 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:41.688 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:41.688 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:41.688 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:41.688 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:41.688 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:41.688 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.688 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:41.688 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:41.688 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:41.688 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:41.688 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:41.688 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.688 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.688 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.688 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.688 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.688 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.688 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.688 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.688 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.688 "name": "Existed_Raid", 00:08:41.688 "uuid": "57f60427-dc45-4bb8-93c6-a805a24ada18", 00:08:41.688 "strip_size_kb": 64, 00:08:41.688 "state": "offline", 00:08:41.688 "raid_level": "raid0", 00:08:41.688 "superblock": true, 00:08:41.688 "num_base_bdevs": 3, 00:08:41.688 "num_base_bdevs_discovered": 2, 00:08:41.688 "num_base_bdevs_operational": 2, 00:08:41.688 "base_bdevs_list": [ 00:08:41.688 { 00:08:41.688 "name": null, 00:08:41.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.688 "is_configured": false, 00:08:41.688 "data_offset": 0, 00:08:41.688 "data_size": 63488 00:08:41.688 }, 00:08:41.688 { 00:08:41.688 "name": "BaseBdev2", 00:08:41.688 "uuid": "75d8effb-fbf9-4443-b1cb-4a8552c906e4", 00:08:41.688 "is_configured": true, 00:08:41.688 "data_offset": 2048, 00:08:41.688 "data_size": 63488 00:08:41.688 }, 00:08:41.688 { 00:08:41.688 "name": "BaseBdev3", 00:08:41.688 "uuid": "ccfaaa2f-a5bc-45af-a8a0-a0b28166e94e", 
00:08:41.688 "is_configured": true, 00:08:41.689 "data_offset": 2048, 00:08:41.689 "data_size": 63488 00:08:41.689 } 00:08:41.689 ] 00:08:41.689 }' 00:08:41.689 17:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.689 17:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.946 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:41.946 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:41.946 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:41.946 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.946 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.946 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.946 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.947 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:41.947 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:41.947 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:41.947 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.947 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.947 [2024-12-07 17:25:15.317598] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:42.205 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.205 17:25:15 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:42.205 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:42.205 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.205 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.205 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.205 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:42.205 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.205 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:42.205 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:42.205 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:42.205 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.205 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.205 [2024-12-07 17:25:15.474756] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:42.205 [2024-12-07 17:25:15.474814] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:42.205 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.205 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:42.205 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:42.205 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:42.205 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:42.205 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.205 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.464 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.464 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:42.464 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:42.464 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:42.464 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:42.464 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:42.464 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:42.464 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.464 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.464 BaseBdev2 00:08:42.464 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.464 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:42.464 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:42.464 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:42.464 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:42.464 17:25:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:42.464 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:42.464 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:42.464 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.464 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.464 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.464 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:42.464 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.464 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.464 [ 00:08:42.464 { 00:08:42.464 "name": "BaseBdev2", 00:08:42.464 "aliases": [ 00:08:42.464 "d8e6da20-59fb-4f15-992b-39318ecdc7c9" 00:08:42.464 ], 00:08:42.464 "product_name": "Malloc disk", 00:08:42.464 "block_size": 512, 00:08:42.464 "num_blocks": 65536, 00:08:42.464 "uuid": "d8e6da20-59fb-4f15-992b-39318ecdc7c9", 00:08:42.464 "assigned_rate_limits": { 00:08:42.464 "rw_ios_per_sec": 0, 00:08:42.464 "rw_mbytes_per_sec": 0, 00:08:42.464 "r_mbytes_per_sec": 0, 00:08:42.464 "w_mbytes_per_sec": 0 00:08:42.464 }, 00:08:42.464 "claimed": false, 00:08:42.464 "zoned": false, 00:08:42.464 "supported_io_types": { 00:08:42.464 "read": true, 00:08:42.464 "write": true, 00:08:42.464 "unmap": true, 00:08:42.464 "flush": true, 00:08:42.464 "reset": true, 00:08:42.464 "nvme_admin": false, 00:08:42.464 "nvme_io": false, 00:08:42.464 "nvme_io_md": false, 00:08:42.464 "write_zeroes": true, 00:08:42.464 "zcopy": true, 00:08:42.464 "get_zone_info": false, 00:08:42.464 
"zone_management": false, 00:08:42.464 "zone_append": false, 00:08:42.464 "compare": false, 00:08:42.464 "compare_and_write": false, 00:08:42.464 "abort": true, 00:08:42.464 "seek_hole": false, 00:08:42.464 "seek_data": false, 00:08:42.464 "copy": true, 00:08:42.464 "nvme_iov_md": false 00:08:42.464 }, 00:08:42.464 "memory_domains": [ 00:08:42.464 { 00:08:42.464 "dma_device_id": "system", 00:08:42.464 "dma_device_type": 1 00:08:42.464 }, 00:08:42.464 { 00:08:42.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.464 "dma_device_type": 2 00:08:42.464 } 00:08:42.464 ], 00:08:42.464 "driver_specific": {} 00:08:42.464 } 00:08:42.464 ] 00:08:42.464 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.464 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:42.464 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:42.464 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:42.465 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:42.465 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.465 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.465 BaseBdev3 00:08:42.465 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.465 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:42.465 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:42.465 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:42.465 17:25:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:08:42.465 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:42.465 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:42.465 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:42.465 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.465 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.465 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.465 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:42.465 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.465 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.465 [ 00:08:42.465 { 00:08:42.465 "name": "BaseBdev3", 00:08:42.465 "aliases": [ 00:08:42.465 "f6f0e51c-bb3f-4512-bf54-5ab15bf8f0e8" 00:08:42.465 ], 00:08:42.465 "product_name": "Malloc disk", 00:08:42.465 "block_size": 512, 00:08:42.465 "num_blocks": 65536, 00:08:42.465 "uuid": "f6f0e51c-bb3f-4512-bf54-5ab15bf8f0e8", 00:08:42.465 "assigned_rate_limits": { 00:08:42.465 "rw_ios_per_sec": 0, 00:08:42.465 "rw_mbytes_per_sec": 0, 00:08:42.465 "r_mbytes_per_sec": 0, 00:08:42.465 "w_mbytes_per_sec": 0 00:08:42.465 }, 00:08:42.465 "claimed": false, 00:08:42.465 "zoned": false, 00:08:42.465 "supported_io_types": { 00:08:42.465 "read": true, 00:08:42.465 "write": true, 00:08:42.465 "unmap": true, 00:08:42.465 "flush": true, 00:08:42.465 "reset": true, 00:08:42.465 "nvme_admin": false, 00:08:42.465 "nvme_io": false, 00:08:42.465 "nvme_io_md": false, 00:08:42.465 "write_zeroes": true, 00:08:42.465 
"zcopy": true, 00:08:42.465 "get_zone_info": false, 00:08:42.465 "zone_management": false, 00:08:42.465 "zone_append": false, 00:08:42.465 "compare": false, 00:08:42.465 "compare_and_write": false, 00:08:42.465 "abort": true, 00:08:42.465 "seek_hole": false, 00:08:42.465 "seek_data": false, 00:08:42.465 "copy": true, 00:08:42.465 "nvme_iov_md": false 00:08:42.465 }, 00:08:42.465 "memory_domains": [ 00:08:42.465 { 00:08:42.465 "dma_device_id": "system", 00:08:42.465 "dma_device_type": 1 00:08:42.465 }, 00:08:42.465 { 00:08:42.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.465 "dma_device_type": 2 00:08:42.465 } 00:08:42.465 ], 00:08:42.465 "driver_specific": {} 00:08:42.465 } 00:08:42.465 ] 00:08:42.465 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.465 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:42.465 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:42.465 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:42.465 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:42.465 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.465 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.465 [2024-12-07 17:25:15.780983] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:42.465 [2024-12-07 17:25:15.781092] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:42.465 [2024-12-07 17:25:15.781136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:42.465 [2024-12-07 17:25:15.782926] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:42.465 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.465 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:42.465 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:42.465 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:42.465 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:42.465 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:42.465 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:42.465 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.465 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.465 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.465 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.465 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.465 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:42.465 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.465 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.465 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.465 17:25:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.465 "name": "Existed_Raid", 00:08:42.465 "uuid": "fb8b5050-6ca1-409c-888d-33b42f397568", 00:08:42.465 "strip_size_kb": 64, 00:08:42.465 "state": "configuring", 00:08:42.465 "raid_level": "raid0", 00:08:42.465 "superblock": true, 00:08:42.465 "num_base_bdevs": 3, 00:08:42.465 "num_base_bdevs_discovered": 2, 00:08:42.465 "num_base_bdevs_operational": 3, 00:08:42.465 "base_bdevs_list": [ 00:08:42.465 { 00:08:42.465 "name": "BaseBdev1", 00:08:42.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.465 "is_configured": false, 00:08:42.465 "data_offset": 0, 00:08:42.465 "data_size": 0 00:08:42.465 }, 00:08:42.465 { 00:08:42.465 "name": "BaseBdev2", 00:08:42.465 "uuid": "d8e6da20-59fb-4f15-992b-39318ecdc7c9", 00:08:42.465 "is_configured": true, 00:08:42.465 "data_offset": 2048, 00:08:42.465 "data_size": 63488 00:08:42.465 }, 00:08:42.465 { 00:08:42.465 "name": "BaseBdev3", 00:08:42.465 "uuid": "f6f0e51c-bb3f-4512-bf54-5ab15bf8f0e8", 00:08:42.465 "is_configured": true, 00:08:42.465 "data_offset": 2048, 00:08:42.465 "data_size": 63488 00:08:42.465 } 00:08:42.465 ] 00:08:42.465 }' 00:08:42.465 17:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.465 17:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.040 17:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:43.040 17:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.040 17:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.040 [2024-12-07 17:25:16.204278] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:43.040 17:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.040 17:25:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:43.040 17:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:43.040 17:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:43.040 17:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:43.040 17:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.040 17:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:43.040 17:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.040 17:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.040 17:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.040 17:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.040 17:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.040 17:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:43.040 17:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.040 17:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.040 17:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.040 17:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.040 "name": "Existed_Raid", 00:08:43.040 "uuid": "fb8b5050-6ca1-409c-888d-33b42f397568", 00:08:43.040 "strip_size_kb": 64, 
00:08:43.040 "state": "configuring", 00:08:43.040 "raid_level": "raid0", 00:08:43.040 "superblock": true, 00:08:43.040 "num_base_bdevs": 3, 00:08:43.040 "num_base_bdevs_discovered": 1, 00:08:43.040 "num_base_bdevs_operational": 3, 00:08:43.040 "base_bdevs_list": [ 00:08:43.040 { 00:08:43.040 "name": "BaseBdev1", 00:08:43.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:43.040 "is_configured": false, 00:08:43.040 "data_offset": 0, 00:08:43.040 "data_size": 0 00:08:43.040 }, 00:08:43.040 { 00:08:43.040 "name": null, 00:08:43.040 "uuid": "d8e6da20-59fb-4f15-992b-39318ecdc7c9", 00:08:43.040 "is_configured": false, 00:08:43.040 "data_offset": 0, 00:08:43.040 "data_size": 63488 00:08:43.040 }, 00:08:43.040 { 00:08:43.040 "name": "BaseBdev3", 00:08:43.040 "uuid": "f6f0e51c-bb3f-4512-bf54-5ab15bf8f0e8", 00:08:43.040 "is_configured": true, 00:08:43.040 "data_offset": 2048, 00:08:43.040 "data_size": 63488 00:08:43.040 } 00:08:43.040 ] 00:08:43.040 }' 00:08:43.040 17:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.040 17:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.300 17:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.300 17:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.300 17:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.300 17:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:43.300 17:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.300 17:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:43.300 17:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:08:43.300 17:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.300 17:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.560 [2024-12-07 17:25:16.692678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:43.560 BaseBdev1 00:08:43.560 17:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.560 17:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:43.560 17:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:43.560 17:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:43.560 17:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:43.560 17:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:43.560 17:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:43.560 17:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:43.560 17:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.560 17:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.560 17:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.560 17:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:43.560 17:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.560 17:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.560 
[ 00:08:43.560 { 00:08:43.560 "name": "BaseBdev1", 00:08:43.560 "aliases": [ 00:08:43.560 "9be2d5c3-be5b-447f-989f-76186c7283a4" 00:08:43.560 ], 00:08:43.560 "product_name": "Malloc disk", 00:08:43.560 "block_size": 512, 00:08:43.560 "num_blocks": 65536, 00:08:43.560 "uuid": "9be2d5c3-be5b-447f-989f-76186c7283a4", 00:08:43.560 "assigned_rate_limits": { 00:08:43.560 "rw_ios_per_sec": 0, 00:08:43.560 "rw_mbytes_per_sec": 0, 00:08:43.560 "r_mbytes_per_sec": 0, 00:08:43.560 "w_mbytes_per_sec": 0 00:08:43.560 }, 00:08:43.560 "claimed": true, 00:08:43.560 "claim_type": "exclusive_write", 00:08:43.560 "zoned": false, 00:08:43.560 "supported_io_types": { 00:08:43.560 "read": true, 00:08:43.560 "write": true, 00:08:43.560 "unmap": true, 00:08:43.560 "flush": true, 00:08:43.560 "reset": true, 00:08:43.560 "nvme_admin": false, 00:08:43.560 "nvme_io": false, 00:08:43.560 "nvme_io_md": false, 00:08:43.560 "write_zeroes": true, 00:08:43.560 "zcopy": true, 00:08:43.560 "get_zone_info": false, 00:08:43.560 "zone_management": false, 00:08:43.560 "zone_append": false, 00:08:43.560 "compare": false, 00:08:43.560 "compare_and_write": false, 00:08:43.560 "abort": true, 00:08:43.560 "seek_hole": false, 00:08:43.560 "seek_data": false, 00:08:43.560 "copy": true, 00:08:43.560 "nvme_iov_md": false 00:08:43.560 }, 00:08:43.560 "memory_domains": [ 00:08:43.560 { 00:08:43.560 "dma_device_id": "system", 00:08:43.560 "dma_device_type": 1 00:08:43.560 }, 00:08:43.560 { 00:08:43.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.560 "dma_device_type": 2 00:08:43.560 } 00:08:43.560 ], 00:08:43.560 "driver_specific": {} 00:08:43.560 } 00:08:43.560 ] 00:08:43.560 17:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.560 17:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:43.560 17:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:08:43.560 17:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:43.560 17:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:43.560 17:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:43.560 17:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.560 17:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:43.560 17:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.560 17:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.560 17:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.560 17:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.560 17:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.560 17:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.560 17:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:43.560 17:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.560 17:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.560 17:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.560 "name": "Existed_Raid", 00:08:43.560 "uuid": "fb8b5050-6ca1-409c-888d-33b42f397568", 00:08:43.560 "strip_size_kb": 64, 00:08:43.560 "state": "configuring", 00:08:43.560 "raid_level": "raid0", 00:08:43.560 "superblock": true, 
00:08:43.560 "num_base_bdevs": 3, 00:08:43.560 "num_base_bdevs_discovered": 2, 00:08:43.560 "num_base_bdevs_operational": 3, 00:08:43.560 "base_bdevs_list": [ 00:08:43.560 { 00:08:43.560 "name": "BaseBdev1", 00:08:43.560 "uuid": "9be2d5c3-be5b-447f-989f-76186c7283a4", 00:08:43.560 "is_configured": true, 00:08:43.560 "data_offset": 2048, 00:08:43.560 "data_size": 63488 00:08:43.560 }, 00:08:43.560 { 00:08:43.560 "name": null, 00:08:43.560 "uuid": "d8e6da20-59fb-4f15-992b-39318ecdc7c9", 00:08:43.560 "is_configured": false, 00:08:43.560 "data_offset": 0, 00:08:43.560 "data_size": 63488 00:08:43.560 }, 00:08:43.560 { 00:08:43.560 "name": "BaseBdev3", 00:08:43.560 "uuid": "f6f0e51c-bb3f-4512-bf54-5ab15bf8f0e8", 00:08:43.560 "is_configured": true, 00:08:43.560 "data_offset": 2048, 00:08:43.560 "data_size": 63488 00:08:43.560 } 00:08:43.560 ] 00:08:43.560 }' 00:08:43.560 17:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.560 17:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.819 17:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.819 17:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.819 17:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.819 17:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:43.819 17:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.078 17:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:44.078 17:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:44.078 17:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:08:44.078 17:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.078 [2024-12-07 17:25:17.231810] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:44.078 17:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.078 17:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:44.078 17:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:44.078 17:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:44.078 17:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:44.078 17:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:44.078 17:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:44.078 17:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.078 17:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.078 17:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.078 17:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.078 17:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:44.078 17:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.078 17:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.078 17:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:08:44.078 17:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.078 17:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.078 "name": "Existed_Raid", 00:08:44.079 "uuid": "fb8b5050-6ca1-409c-888d-33b42f397568", 00:08:44.079 "strip_size_kb": 64, 00:08:44.079 "state": "configuring", 00:08:44.079 "raid_level": "raid0", 00:08:44.079 "superblock": true, 00:08:44.079 "num_base_bdevs": 3, 00:08:44.079 "num_base_bdevs_discovered": 1, 00:08:44.079 "num_base_bdevs_operational": 3, 00:08:44.079 "base_bdevs_list": [ 00:08:44.079 { 00:08:44.079 "name": "BaseBdev1", 00:08:44.079 "uuid": "9be2d5c3-be5b-447f-989f-76186c7283a4", 00:08:44.079 "is_configured": true, 00:08:44.079 "data_offset": 2048, 00:08:44.079 "data_size": 63488 00:08:44.079 }, 00:08:44.079 { 00:08:44.079 "name": null, 00:08:44.079 "uuid": "d8e6da20-59fb-4f15-992b-39318ecdc7c9", 00:08:44.079 "is_configured": false, 00:08:44.079 "data_offset": 0, 00:08:44.079 "data_size": 63488 00:08:44.079 }, 00:08:44.079 { 00:08:44.079 "name": null, 00:08:44.079 "uuid": "f6f0e51c-bb3f-4512-bf54-5ab15bf8f0e8", 00:08:44.079 "is_configured": false, 00:08:44.079 "data_offset": 0, 00:08:44.079 "data_size": 63488 00:08:44.079 } 00:08:44.079 ] 00:08:44.079 }' 00:08:44.079 17:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.079 17:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.337 17:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.337 17:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.337 17:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.337 17:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 
00:08:44.337 17:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.337 17:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:44.337 17:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:44.337 17:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.337 17:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.337 [2024-12-07 17:25:17.671112] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:44.337 17:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.337 17:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:44.337 17:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:44.337 17:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:44.337 17:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:44.337 17:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:44.337 17:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:44.337 17:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.337 17:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.337 17:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.337 17:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:08:44.337 17:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.337 17:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:44.337 17:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.337 17:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.337 17:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.595 17:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.595 "name": "Existed_Raid", 00:08:44.595 "uuid": "fb8b5050-6ca1-409c-888d-33b42f397568", 00:08:44.595 "strip_size_kb": 64, 00:08:44.595 "state": "configuring", 00:08:44.596 "raid_level": "raid0", 00:08:44.596 "superblock": true, 00:08:44.596 "num_base_bdevs": 3, 00:08:44.596 "num_base_bdevs_discovered": 2, 00:08:44.596 "num_base_bdevs_operational": 3, 00:08:44.596 "base_bdevs_list": [ 00:08:44.596 { 00:08:44.596 "name": "BaseBdev1", 00:08:44.596 "uuid": "9be2d5c3-be5b-447f-989f-76186c7283a4", 00:08:44.596 "is_configured": true, 00:08:44.596 "data_offset": 2048, 00:08:44.596 "data_size": 63488 00:08:44.596 }, 00:08:44.596 { 00:08:44.596 "name": null, 00:08:44.596 "uuid": "d8e6da20-59fb-4f15-992b-39318ecdc7c9", 00:08:44.596 "is_configured": false, 00:08:44.596 "data_offset": 0, 00:08:44.596 "data_size": 63488 00:08:44.596 }, 00:08:44.596 { 00:08:44.596 "name": "BaseBdev3", 00:08:44.596 "uuid": "f6f0e51c-bb3f-4512-bf54-5ab15bf8f0e8", 00:08:44.596 "is_configured": true, 00:08:44.596 "data_offset": 2048, 00:08:44.596 "data_size": 63488 00:08:44.596 } 00:08:44.596 ] 00:08:44.596 }' 00:08:44.596 17:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.596 17:25:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:44.855 17:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:44.855 17:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.855 17:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.855 17:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.855 17:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.855 17:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:44.855 17:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:44.855 17:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.855 17:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.855 [2024-12-07 17:25:18.170257] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:45.115 17:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.115 17:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:45.115 17:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:45.115 17:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:45.115 17:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:45.115 17:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:45.115 17:25:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:45.115 17:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.115 17:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.115 17:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.115 17:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.115 17:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.115 17:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:45.115 17:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.115 17:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.115 17:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.115 17:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.115 "name": "Existed_Raid", 00:08:45.115 "uuid": "fb8b5050-6ca1-409c-888d-33b42f397568", 00:08:45.115 "strip_size_kb": 64, 00:08:45.115 "state": "configuring", 00:08:45.115 "raid_level": "raid0", 00:08:45.115 "superblock": true, 00:08:45.115 "num_base_bdevs": 3, 00:08:45.115 "num_base_bdevs_discovered": 1, 00:08:45.115 "num_base_bdevs_operational": 3, 00:08:45.115 "base_bdevs_list": [ 00:08:45.115 { 00:08:45.115 "name": null, 00:08:45.115 "uuid": "9be2d5c3-be5b-447f-989f-76186c7283a4", 00:08:45.115 "is_configured": false, 00:08:45.115 "data_offset": 0, 00:08:45.115 "data_size": 63488 00:08:45.115 }, 00:08:45.115 { 00:08:45.115 "name": null, 00:08:45.115 "uuid": "d8e6da20-59fb-4f15-992b-39318ecdc7c9", 00:08:45.115 "is_configured": false, 00:08:45.115 "data_offset": 0, 00:08:45.115 
"data_size": 63488 00:08:45.115 }, 00:08:45.115 { 00:08:45.115 "name": "BaseBdev3", 00:08:45.115 "uuid": "f6f0e51c-bb3f-4512-bf54-5ab15bf8f0e8", 00:08:45.115 "is_configured": true, 00:08:45.115 "data_offset": 2048, 00:08:45.115 "data_size": 63488 00:08:45.115 } 00:08:45.115 ] 00:08:45.115 }' 00:08:45.115 17:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.115 17:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.375 17:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.375 17:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.375 17:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.375 17:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:45.375 17:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.375 17:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:45.375 17:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:45.375 17:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.375 17:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.375 [2024-12-07 17:25:18.724563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:45.375 17:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.375 17:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:45.375 17:25:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:45.375 17:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:45.375 17:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:45.375 17:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:45.375 17:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:45.375 17:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.375 17:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.375 17:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.375 17:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.375 17:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.375 17:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.375 17:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.375 17:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:45.375 17:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.634 17:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.634 "name": "Existed_Raid", 00:08:45.634 "uuid": "fb8b5050-6ca1-409c-888d-33b42f397568", 00:08:45.634 "strip_size_kb": 64, 00:08:45.634 "state": "configuring", 00:08:45.634 "raid_level": "raid0", 00:08:45.634 "superblock": true, 00:08:45.634 "num_base_bdevs": 3, 00:08:45.634 
"num_base_bdevs_discovered": 2, 00:08:45.634 "num_base_bdevs_operational": 3, 00:08:45.634 "base_bdevs_list": [ 00:08:45.634 { 00:08:45.634 "name": null, 00:08:45.634 "uuid": "9be2d5c3-be5b-447f-989f-76186c7283a4", 00:08:45.634 "is_configured": false, 00:08:45.634 "data_offset": 0, 00:08:45.634 "data_size": 63488 00:08:45.634 }, 00:08:45.634 { 00:08:45.634 "name": "BaseBdev2", 00:08:45.634 "uuid": "d8e6da20-59fb-4f15-992b-39318ecdc7c9", 00:08:45.634 "is_configured": true, 00:08:45.634 "data_offset": 2048, 00:08:45.634 "data_size": 63488 00:08:45.634 }, 00:08:45.634 { 00:08:45.634 "name": "BaseBdev3", 00:08:45.634 "uuid": "f6f0e51c-bb3f-4512-bf54-5ab15bf8f0e8", 00:08:45.634 "is_configured": true, 00:08:45.634 "data_offset": 2048, 00:08:45.634 "data_size": 63488 00:08:45.634 } 00:08:45.634 ] 00:08:45.634 }' 00:08:45.634 17:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.634 17:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.893 17:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:45.893 17:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.893 17:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.893 17:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.893 17:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.893 17:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:45.893 17:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.893 17:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:45.893 17:25:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.893 17:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.893 17:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.152 17:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9be2d5c3-be5b-447f-989f-76186c7283a4 00:08:46.152 17:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.152 17:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.152 [2024-12-07 17:25:19.328478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:46.152 [2024-12-07 17:25:19.328799] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:46.152 [2024-12-07 17:25:19.328857] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:46.152 [2024-12-07 17:25:19.329143] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:46.152 [2024-12-07 17:25:19.329334] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:46.152 [2024-12-07 17:25:19.329376] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:46.152 NewBaseBdev 00:08:46.152 [2024-12-07 17:25:19.329573] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:46.152 17:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.152 17:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:46.152 17:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:46.152 
17:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:46.152 17:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:46.152 17:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:46.152 17:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:46.152 17:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:46.152 17:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.152 17:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.152 17:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.152 17:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:46.152 17:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.152 17:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.152 [ 00:08:46.152 { 00:08:46.152 "name": "NewBaseBdev", 00:08:46.152 "aliases": [ 00:08:46.152 "9be2d5c3-be5b-447f-989f-76186c7283a4" 00:08:46.152 ], 00:08:46.152 "product_name": "Malloc disk", 00:08:46.152 "block_size": 512, 00:08:46.152 "num_blocks": 65536, 00:08:46.152 "uuid": "9be2d5c3-be5b-447f-989f-76186c7283a4", 00:08:46.152 "assigned_rate_limits": { 00:08:46.152 "rw_ios_per_sec": 0, 00:08:46.152 "rw_mbytes_per_sec": 0, 00:08:46.152 "r_mbytes_per_sec": 0, 00:08:46.152 "w_mbytes_per_sec": 0 00:08:46.152 }, 00:08:46.152 "claimed": true, 00:08:46.152 "claim_type": "exclusive_write", 00:08:46.152 "zoned": false, 00:08:46.152 "supported_io_types": { 00:08:46.152 "read": true, 00:08:46.152 "write": true, 00:08:46.152 
"unmap": true, 00:08:46.152 "flush": true, 00:08:46.152 "reset": true, 00:08:46.152 "nvme_admin": false, 00:08:46.152 "nvme_io": false, 00:08:46.152 "nvme_io_md": false, 00:08:46.152 "write_zeroes": true, 00:08:46.152 "zcopy": true, 00:08:46.152 "get_zone_info": false, 00:08:46.152 "zone_management": false, 00:08:46.152 "zone_append": false, 00:08:46.152 "compare": false, 00:08:46.152 "compare_and_write": false, 00:08:46.152 "abort": true, 00:08:46.152 "seek_hole": false, 00:08:46.152 "seek_data": false, 00:08:46.152 "copy": true, 00:08:46.152 "nvme_iov_md": false 00:08:46.152 }, 00:08:46.152 "memory_domains": [ 00:08:46.152 { 00:08:46.152 "dma_device_id": "system", 00:08:46.152 "dma_device_type": 1 00:08:46.152 }, 00:08:46.152 { 00:08:46.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.152 "dma_device_type": 2 00:08:46.152 } 00:08:46.152 ], 00:08:46.152 "driver_specific": {} 00:08:46.152 } 00:08:46.152 ] 00:08:46.152 17:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.152 17:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:46.152 17:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:46.152 17:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:46.152 17:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:46.152 17:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:46.152 17:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:46.152 17:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:46.152 17:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:08:46.152 17:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.152 17:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.152 17:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.152 17:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.152 17:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.152 17:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.152 17:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.152 17:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.152 17:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.152 "name": "Existed_Raid", 00:08:46.152 "uuid": "fb8b5050-6ca1-409c-888d-33b42f397568", 00:08:46.152 "strip_size_kb": 64, 00:08:46.152 "state": "online", 00:08:46.152 "raid_level": "raid0", 00:08:46.152 "superblock": true, 00:08:46.152 "num_base_bdevs": 3, 00:08:46.152 "num_base_bdevs_discovered": 3, 00:08:46.152 "num_base_bdevs_operational": 3, 00:08:46.152 "base_bdevs_list": [ 00:08:46.153 { 00:08:46.153 "name": "NewBaseBdev", 00:08:46.153 "uuid": "9be2d5c3-be5b-447f-989f-76186c7283a4", 00:08:46.153 "is_configured": true, 00:08:46.153 "data_offset": 2048, 00:08:46.153 "data_size": 63488 00:08:46.153 }, 00:08:46.153 { 00:08:46.153 "name": "BaseBdev2", 00:08:46.153 "uuid": "d8e6da20-59fb-4f15-992b-39318ecdc7c9", 00:08:46.153 "is_configured": true, 00:08:46.153 "data_offset": 2048, 00:08:46.153 "data_size": 63488 00:08:46.153 }, 00:08:46.153 { 00:08:46.153 "name": "BaseBdev3", 00:08:46.153 "uuid": "f6f0e51c-bb3f-4512-bf54-5ab15bf8f0e8", 00:08:46.153 
"is_configured": true, 00:08:46.153 "data_offset": 2048, 00:08:46.153 "data_size": 63488 00:08:46.153 } 00:08:46.153 ] 00:08:46.153 }' 00:08:46.153 17:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.153 17:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.721 17:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:46.721 17:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:46.721 17:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:46.721 17:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:46.721 17:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:46.721 17:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:46.721 17:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:46.721 17:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:46.721 17:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.721 17:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.721 [2024-12-07 17:25:19.820032] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:46.721 17:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.721 17:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:46.721 "name": "Existed_Raid", 00:08:46.722 "aliases": [ 00:08:46.722 "fb8b5050-6ca1-409c-888d-33b42f397568" 00:08:46.722 ], 00:08:46.722 "product_name": "Raid 
Volume", 00:08:46.722 "block_size": 512, 00:08:46.722 "num_blocks": 190464, 00:08:46.722 "uuid": "fb8b5050-6ca1-409c-888d-33b42f397568", 00:08:46.722 "assigned_rate_limits": { 00:08:46.722 "rw_ios_per_sec": 0, 00:08:46.722 "rw_mbytes_per_sec": 0, 00:08:46.722 "r_mbytes_per_sec": 0, 00:08:46.722 "w_mbytes_per_sec": 0 00:08:46.722 }, 00:08:46.722 "claimed": false, 00:08:46.722 "zoned": false, 00:08:46.722 "supported_io_types": { 00:08:46.722 "read": true, 00:08:46.722 "write": true, 00:08:46.722 "unmap": true, 00:08:46.722 "flush": true, 00:08:46.722 "reset": true, 00:08:46.722 "nvme_admin": false, 00:08:46.722 "nvme_io": false, 00:08:46.722 "nvme_io_md": false, 00:08:46.722 "write_zeroes": true, 00:08:46.722 "zcopy": false, 00:08:46.722 "get_zone_info": false, 00:08:46.722 "zone_management": false, 00:08:46.722 "zone_append": false, 00:08:46.722 "compare": false, 00:08:46.722 "compare_and_write": false, 00:08:46.722 "abort": false, 00:08:46.722 "seek_hole": false, 00:08:46.722 "seek_data": false, 00:08:46.722 "copy": false, 00:08:46.722 "nvme_iov_md": false 00:08:46.722 }, 00:08:46.722 "memory_domains": [ 00:08:46.722 { 00:08:46.722 "dma_device_id": "system", 00:08:46.722 "dma_device_type": 1 00:08:46.722 }, 00:08:46.722 { 00:08:46.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.722 "dma_device_type": 2 00:08:46.722 }, 00:08:46.722 { 00:08:46.722 "dma_device_id": "system", 00:08:46.722 "dma_device_type": 1 00:08:46.722 }, 00:08:46.722 { 00:08:46.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.722 "dma_device_type": 2 00:08:46.722 }, 00:08:46.722 { 00:08:46.722 "dma_device_id": "system", 00:08:46.722 "dma_device_type": 1 00:08:46.722 }, 00:08:46.722 { 00:08:46.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.722 "dma_device_type": 2 00:08:46.722 } 00:08:46.722 ], 00:08:46.722 "driver_specific": { 00:08:46.722 "raid": { 00:08:46.722 "uuid": "fb8b5050-6ca1-409c-888d-33b42f397568", 00:08:46.722 "strip_size_kb": 64, 00:08:46.722 "state": "online", 
00:08:46.722 "raid_level": "raid0", 00:08:46.722 "superblock": true, 00:08:46.722 "num_base_bdevs": 3, 00:08:46.722 "num_base_bdevs_discovered": 3, 00:08:46.722 "num_base_bdevs_operational": 3, 00:08:46.722 "base_bdevs_list": [ 00:08:46.722 { 00:08:46.722 "name": "NewBaseBdev", 00:08:46.722 "uuid": "9be2d5c3-be5b-447f-989f-76186c7283a4", 00:08:46.722 "is_configured": true, 00:08:46.722 "data_offset": 2048, 00:08:46.722 "data_size": 63488 00:08:46.722 }, 00:08:46.722 { 00:08:46.722 "name": "BaseBdev2", 00:08:46.722 "uuid": "d8e6da20-59fb-4f15-992b-39318ecdc7c9", 00:08:46.722 "is_configured": true, 00:08:46.722 "data_offset": 2048, 00:08:46.722 "data_size": 63488 00:08:46.722 }, 00:08:46.722 { 00:08:46.722 "name": "BaseBdev3", 00:08:46.722 "uuid": "f6f0e51c-bb3f-4512-bf54-5ab15bf8f0e8", 00:08:46.722 "is_configured": true, 00:08:46.722 "data_offset": 2048, 00:08:46.722 "data_size": 63488 00:08:46.722 } 00:08:46.722 ] 00:08:46.722 } 00:08:46.722 } 00:08:46.722 }' 00:08:46.722 17:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:46.722 17:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:46.722 BaseBdev2 00:08:46.722 BaseBdev3' 00:08:46.722 17:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:46.722 17:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:46.722 17:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:46.722 17:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:46.722 17:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" 
")' 00:08:46.722 17:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.722 17:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.722 17:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.722 17:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:46.722 17:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:46.722 17:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:46.722 17:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:46.722 17:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:46.722 17:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.722 17:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.722 17:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.722 17:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:46.722 17:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:46.722 17:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:46.722 17:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:46.722 17:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.722 17:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.722 
17:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:46.722 17:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.722 17:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:46.722 17:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:46.722 17:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:46.722 17:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.722 17:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.722 [2024-12-07 17:25:20.083230] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:46.722 [2024-12-07 17:25:20.083304] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:46.722 [2024-12-07 17:25:20.083425] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:46.722 [2024-12-07 17:25:20.083516] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:46.722 [2024-12-07 17:25:20.083571] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:46.722 17:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.722 17:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64472 00:08:46.722 17:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64472 ']' 00:08:46.722 17:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 64472 00:08:46.722 17:25:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:46.722 17:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:46.722 17:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64472 00:08:46.982 killing process with pid 64472 00:08:46.982 17:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:46.982 17:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:46.982 17:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64472' 00:08:46.982 17:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64472 00:08:46.982 [2024-12-07 17:25:20.127782] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:46.982 17:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64472 00:08:47.241 [2024-12-07 17:25:20.423666] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:48.179 ************************************ 00:08:48.179 END TEST raid_state_function_test_sb 00:08:48.179 ************************************ 00:08:48.179 17:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:48.179 00:08:48.179 real 0m10.572s 00:08:48.179 user 0m16.816s 00:08:48.179 sys 0m1.830s 00:08:48.179 17:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:48.179 17:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.440 17:25:21 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:08:48.440 17:25:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:48.440 17:25:21 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:08:48.440 17:25:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:48.440 ************************************ 00:08:48.440 START TEST raid_superblock_test 00:08:48.440 ************************************ 00:08:48.440 17:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:08:48.440 17:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:48.440 17:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:48.440 17:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:48.440 17:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:48.440 17:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:48.440 17:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:48.440 17:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:48.440 17:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:48.440 17:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:48.440 17:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:48.440 17:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:48.440 17:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:48.440 17:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:48.440 17:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:48.440 17:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:48.440 17:25:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:48.440 17:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65091 00:08:48.440 17:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:48.440 17:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65091 00:08:48.440 17:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 65091 ']' 00:08:48.440 17:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.440 17:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:48.440 17:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:48.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:48.440 17:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:48.440 17:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.440 [2024-12-07 17:25:21.698360] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:08:48.440 [2024-12-07 17:25:21.698566] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65091 ] 00:08:48.700 [2024-12-07 17:25:21.871405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.700 [2024-12-07 17:25:21.981951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.960 [2024-12-07 17:25:22.179109] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:48.960 [2024-12-07 17:25:22.179168] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:49.221 17:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:49.221 17:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:49.221 17:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:49.221 17:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:49.221 17:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:49.221 17:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:49.221 17:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:49.221 17:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:49.221 17:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:49.221 17:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:49.221 17:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:49.221 
17:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.221 17:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.221 malloc1 00:08:49.221 17:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.221 17:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:49.221 17:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.221 17:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.221 [2024-12-07 17:25:22.581292] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:49.221 [2024-12-07 17:25:22.581462] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:49.221 [2024-12-07 17:25:22.581505] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:49.221 [2024-12-07 17:25:22.581536] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:49.221 [2024-12-07 17:25:22.583620] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:49.221 [2024-12-07 17:25:22.583701] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:49.221 pt1 00:08:49.221 17:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.221 17:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:49.221 17:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:49.221 17:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:49.221 17:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:49.221 17:25:22 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:49.221 17:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:49.221 17:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:49.221 17:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:49.221 17:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:49.221 17:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.221 17:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.482 malloc2 00:08:49.482 17:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.482 17:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:49.482 17:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.482 17:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.482 [2024-12-07 17:25:22.639060] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:49.482 [2024-12-07 17:25:22.639129] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:49.482 [2024-12-07 17:25:22.639154] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:49.482 [2024-12-07 17:25:22.639163] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:49.482 [2024-12-07 17:25:22.641150] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:49.482 [2024-12-07 17:25:22.641253] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:49.482 
pt2 00:08:49.482 17:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.482 17:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:49.482 17:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:49.482 17:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:49.482 17:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:49.482 17:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:49.482 17:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:49.482 17:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:49.482 17:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:49.482 17:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:49.482 17:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.482 17:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.482 malloc3 00:08:49.482 17:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.482 17:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:49.482 17:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.482 17:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.482 [2024-12-07 17:25:22.709913] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:49.482 [2024-12-07 17:25:22.710070] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:49.482 [2024-12-07 17:25:22.710126] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:49.482 [2024-12-07 17:25:22.710163] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:49.482 [2024-12-07 17:25:22.712412] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:49.482 [2024-12-07 17:25:22.712486] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:49.482 pt3 00:08:49.482 17:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.482 17:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:49.482 17:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:49.482 17:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:49.483 17:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.483 17:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.483 [2024-12-07 17:25:22.721949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:49.483 [2024-12-07 17:25:22.723746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:49.483 [2024-12-07 17:25:22.723850] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:49.483 [2024-12-07 17:25:22.724069] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:49.483 [2024-12-07 17:25:22.724121] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:49.483 [2024-12-07 17:25:22.724387] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:08:49.483 [2024-12-07 17:25:22.724587] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:49.483 [2024-12-07 17:25:22.724637] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:49.483 [2024-12-07 17:25:22.724832] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:49.483 17:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.483 17:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:49.483 17:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:49.483 17:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:49.483 17:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:49.483 17:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.483 17:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:49.483 17:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.483 17:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.483 17:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.483 17:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.483 17:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.483 17:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:49.483 17:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.483 17:25:22 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.483 17:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.483 17:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.483 "name": "raid_bdev1", 00:08:49.483 "uuid": "51c33f74-9717-4471-8597-2f75b9237a5c", 00:08:49.483 "strip_size_kb": 64, 00:08:49.483 "state": "online", 00:08:49.483 "raid_level": "raid0", 00:08:49.483 "superblock": true, 00:08:49.483 "num_base_bdevs": 3, 00:08:49.483 "num_base_bdevs_discovered": 3, 00:08:49.483 "num_base_bdevs_operational": 3, 00:08:49.483 "base_bdevs_list": [ 00:08:49.483 { 00:08:49.483 "name": "pt1", 00:08:49.483 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:49.483 "is_configured": true, 00:08:49.483 "data_offset": 2048, 00:08:49.483 "data_size": 63488 00:08:49.483 }, 00:08:49.483 { 00:08:49.483 "name": "pt2", 00:08:49.483 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:49.483 "is_configured": true, 00:08:49.483 "data_offset": 2048, 00:08:49.483 "data_size": 63488 00:08:49.483 }, 00:08:49.483 { 00:08:49.483 "name": "pt3", 00:08:49.483 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:49.483 "is_configured": true, 00:08:49.483 "data_offset": 2048, 00:08:49.483 "data_size": 63488 00:08:49.483 } 00:08:49.483 ] 00:08:49.483 }' 00:08:49.483 17:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.483 17:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.052 17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:50.052 17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:50.052 17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:50.052 17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:08:50.052 17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:50.052 17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:50.052 17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:50.052 17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:50.052 17:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.052 17:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.052 [2024-12-07 17:25:23.209420] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:50.052 17:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.052 17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:50.052 "name": "raid_bdev1", 00:08:50.052 "aliases": [ 00:08:50.052 "51c33f74-9717-4471-8597-2f75b9237a5c" 00:08:50.052 ], 00:08:50.052 "product_name": "Raid Volume", 00:08:50.052 "block_size": 512, 00:08:50.052 "num_blocks": 190464, 00:08:50.052 "uuid": "51c33f74-9717-4471-8597-2f75b9237a5c", 00:08:50.052 "assigned_rate_limits": { 00:08:50.052 "rw_ios_per_sec": 0, 00:08:50.052 "rw_mbytes_per_sec": 0, 00:08:50.052 "r_mbytes_per_sec": 0, 00:08:50.052 "w_mbytes_per_sec": 0 00:08:50.052 }, 00:08:50.052 "claimed": false, 00:08:50.052 "zoned": false, 00:08:50.052 "supported_io_types": { 00:08:50.052 "read": true, 00:08:50.052 "write": true, 00:08:50.052 "unmap": true, 00:08:50.052 "flush": true, 00:08:50.052 "reset": true, 00:08:50.052 "nvme_admin": false, 00:08:50.052 "nvme_io": false, 00:08:50.052 "nvme_io_md": false, 00:08:50.052 "write_zeroes": true, 00:08:50.052 "zcopy": false, 00:08:50.052 "get_zone_info": false, 00:08:50.052 "zone_management": false, 00:08:50.052 "zone_append": false, 00:08:50.052 "compare": 
false, 00:08:50.052 "compare_and_write": false, 00:08:50.052 "abort": false, 00:08:50.052 "seek_hole": false, 00:08:50.052 "seek_data": false, 00:08:50.052 "copy": false, 00:08:50.052 "nvme_iov_md": false 00:08:50.052 }, 00:08:50.052 "memory_domains": [ 00:08:50.052 { 00:08:50.052 "dma_device_id": "system", 00:08:50.052 "dma_device_type": 1 00:08:50.052 }, 00:08:50.052 { 00:08:50.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.052 "dma_device_type": 2 00:08:50.052 }, 00:08:50.052 { 00:08:50.052 "dma_device_id": "system", 00:08:50.052 "dma_device_type": 1 00:08:50.052 }, 00:08:50.052 { 00:08:50.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.052 "dma_device_type": 2 00:08:50.052 }, 00:08:50.052 { 00:08:50.052 "dma_device_id": "system", 00:08:50.052 "dma_device_type": 1 00:08:50.052 }, 00:08:50.052 { 00:08:50.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.052 "dma_device_type": 2 00:08:50.052 } 00:08:50.052 ], 00:08:50.052 "driver_specific": { 00:08:50.052 "raid": { 00:08:50.052 "uuid": "51c33f74-9717-4471-8597-2f75b9237a5c", 00:08:50.052 "strip_size_kb": 64, 00:08:50.052 "state": "online", 00:08:50.052 "raid_level": "raid0", 00:08:50.052 "superblock": true, 00:08:50.052 "num_base_bdevs": 3, 00:08:50.052 "num_base_bdevs_discovered": 3, 00:08:50.052 "num_base_bdevs_operational": 3, 00:08:50.052 "base_bdevs_list": [ 00:08:50.052 { 00:08:50.052 "name": "pt1", 00:08:50.052 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:50.052 "is_configured": true, 00:08:50.052 "data_offset": 2048, 00:08:50.052 "data_size": 63488 00:08:50.052 }, 00:08:50.052 { 00:08:50.052 "name": "pt2", 00:08:50.052 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:50.052 "is_configured": true, 00:08:50.053 "data_offset": 2048, 00:08:50.053 "data_size": 63488 00:08:50.053 }, 00:08:50.053 { 00:08:50.053 "name": "pt3", 00:08:50.053 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:50.053 "is_configured": true, 00:08:50.053 "data_offset": 2048, 00:08:50.053 "data_size": 
63488 00:08:50.053 } 00:08:50.053 ] 00:08:50.053 } 00:08:50.053 } 00:08:50.053 }' 00:08:50.053 17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:50.053 17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:50.053 pt2 00:08:50.053 pt3' 00:08:50.053 17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:50.053 17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:50.053 17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:50.053 17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:50.053 17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:50.053 17:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.053 17:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.053 17:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.053 17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:50.053 17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:50.053 17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:50.053 17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:50.053 17:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.053 17:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.053 
17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:50.053 17:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.053 17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:50.053 17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:50.053 17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:50.053 17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:50.053 17:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.053 17:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.053 17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:50.313 17:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.313 17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:50.313 17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:50.313 17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:50.313 17:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.313 17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:50.313 17:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.313 [2024-12-07 17:25:23.480909] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:50.313 17:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:08:50.313 17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=51c33f74-9717-4471-8597-2f75b9237a5c 00:08:50.313 17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 51c33f74-9717-4471-8597-2f75b9237a5c ']' 00:08:50.313 17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:50.313 17:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.313 17:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.313 [2024-12-07 17:25:23.528534] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:50.313 [2024-12-07 17:25:23.528649] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:50.313 [2024-12-07 17:25:23.528794] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:50.313 [2024-12-07 17:25:23.528890] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:50.313 [2024-12-07 17:25:23.528950] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:50.313 17:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.313 17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.313 17:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.313 17:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.313 17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:50.313 17:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.313 17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:08:50.313 17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:50.313 17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:50.313 17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:50.313 17:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.313 17:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.313 17:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.313 17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:50.313 17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:50.313 17:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.313 17:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.313 17:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.313 17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:50.313 17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:50.313 17:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.313 17:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.313 17:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.313 17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:50.313 17:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.313 17:25:23 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:50.313 17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:50.313 17:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.313 17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:50.313 17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:50.313 17:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:50.313 17:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:50.313 17:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:50.313 17:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:50.313 17:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:50.313 17:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:50.313 17:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:50.313 17:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.313 17:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.313 [2024-12-07 17:25:23.664341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:50.313 [2024-12-07 17:25:23.666230] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:50.313 [2024-12-07 17:25:23.666280] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:50.313 [2024-12-07 17:25:23.666332] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:50.313 [2024-12-07 17:25:23.666383] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:50.313 [2024-12-07 17:25:23.666403] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:50.313 [2024-12-07 17:25:23.666419] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:50.313 [2024-12-07 17:25:23.666431] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:50.313 request: 00:08:50.313 { 00:08:50.313 "name": "raid_bdev1", 00:08:50.313 "raid_level": "raid0", 00:08:50.313 "base_bdevs": [ 00:08:50.313 "malloc1", 00:08:50.313 "malloc2", 00:08:50.313 "malloc3" 00:08:50.313 ], 00:08:50.314 "strip_size_kb": 64, 00:08:50.314 "superblock": false, 00:08:50.314 "method": "bdev_raid_create", 00:08:50.314 "req_id": 1 00:08:50.314 } 00:08:50.314 Got JSON-RPC error response 00:08:50.314 response: 00:08:50.314 { 00:08:50.314 "code": -17, 00:08:50.314 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:50.314 } 00:08:50.314 17:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:50.314 17:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:50.314 17:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:50.314 17:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:50.314 17:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:50.314 17:25:23 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.314 17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:50.314 17:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.314 17:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.314 17:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.574 17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:50.574 17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:50.574 17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:50.574 17:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.574 17:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.574 [2024-12-07 17:25:23.732180] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:50.574 [2024-12-07 17:25:23.732343] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:50.574 [2024-12-07 17:25:23.732382] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:50.574 [2024-12-07 17:25:23.732430] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:50.574 [2024-12-07 17:25:23.734711] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:50.574 [2024-12-07 17:25:23.734783] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:50.574 [2024-12-07 17:25:23.734902] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:50.574 [2024-12-07 17:25:23.734994] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:08:50.574 pt1 00:08:50.574 17:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.574 17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:50.574 17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:50.574 17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:50.574 17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:50.574 17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.574 17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:50.574 17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.574 17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.574 17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.574 17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.574 17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.574 17:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.574 17:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.574 17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:50.574 17:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.574 17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.574 "name": "raid_bdev1", 00:08:50.574 "uuid": "51c33f74-9717-4471-8597-2f75b9237a5c", 00:08:50.574 
"strip_size_kb": 64, 00:08:50.574 "state": "configuring", 00:08:50.574 "raid_level": "raid0", 00:08:50.574 "superblock": true, 00:08:50.574 "num_base_bdevs": 3, 00:08:50.574 "num_base_bdevs_discovered": 1, 00:08:50.574 "num_base_bdevs_operational": 3, 00:08:50.574 "base_bdevs_list": [ 00:08:50.574 { 00:08:50.574 "name": "pt1", 00:08:50.574 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:50.574 "is_configured": true, 00:08:50.574 "data_offset": 2048, 00:08:50.574 "data_size": 63488 00:08:50.574 }, 00:08:50.574 { 00:08:50.574 "name": null, 00:08:50.574 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:50.574 "is_configured": false, 00:08:50.574 "data_offset": 2048, 00:08:50.574 "data_size": 63488 00:08:50.574 }, 00:08:50.574 { 00:08:50.574 "name": null, 00:08:50.574 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:50.574 "is_configured": false, 00:08:50.574 "data_offset": 2048, 00:08:50.574 "data_size": 63488 00:08:50.574 } 00:08:50.574 ] 00:08:50.574 }' 00:08:50.574 17:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.574 17:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.834 17:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:50.834 17:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:50.834 17:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.834 17:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.094 [2024-12-07 17:25:24.215362] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:51.094 [2024-12-07 17:25:24.215445] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:51.094 [2024-12-07 17:25:24.215473] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:08:51.094 [2024-12-07 17:25:24.215482] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:51.094 [2024-12-07 17:25:24.215942] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:51.094 [2024-12-07 17:25:24.215961] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:51.094 [2024-12-07 17:25:24.216055] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:51.094 [2024-12-07 17:25:24.216084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:51.094 pt2 00:08:51.094 17:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.094 17:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:51.094 17:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.094 17:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.094 [2024-12-07 17:25:24.227310] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:51.094 17:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.094 17:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:51.094 17:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:51.094 17:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:51.094 17:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:51.094 17:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.094 17:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:51.094 17:25:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.094 17:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.094 17:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.094 17:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.094 17:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:51.094 17:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.094 17:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.094 17:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.094 17:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.094 17:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.094 "name": "raid_bdev1", 00:08:51.094 "uuid": "51c33f74-9717-4471-8597-2f75b9237a5c", 00:08:51.094 "strip_size_kb": 64, 00:08:51.094 "state": "configuring", 00:08:51.094 "raid_level": "raid0", 00:08:51.094 "superblock": true, 00:08:51.094 "num_base_bdevs": 3, 00:08:51.094 "num_base_bdevs_discovered": 1, 00:08:51.094 "num_base_bdevs_operational": 3, 00:08:51.094 "base_bdevs_list": [ 00:08:51.094 { 00:08:51.094 "name": "pt1", 00:08:51.094 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:51.094 "is_configured": true, 00:08:51.094 "data_offset": 2048, 00:08:51.094 "data_size": 63488 00:08:51.094 }, 00:08:51.094 { 00:08:51.094 "name": null, 00:08:51.094 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:51.094 "is_configured": false, 00:08:51.094 "data_offset": 0, 00:08:51.094 "data_size": 63488 00:08:51.094 }, 00:08:51.094 { 00:08:51.094 "name": null, 00:08:51.094 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:51.095 
"is_configured": false, 00:08:51.095 "data_offset": 2048, 00:08:51.095 "data_size": 63488 00:08:51.095 } 00:08:51.095 ] 00:08:51.095 }' 00:08:51.095 17:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.095 17:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.352 17:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:51.352 17:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:51.352 17:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:51.352 17:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.352 17:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.352 [2024-12-07 17:25:24.666557] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:51.352 [2024-12-07 17:25:24.666709] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:51.352 [2024-12-07 17:25:24.666746] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:08:51.352 [2024-12-07 17:25:24.666776] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:51.352 [2024-12-07 17:25:24.667315] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:51.352 [2024-12-07 17:25:24.667378] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:51.352 [2024-12-07 17:25:24.667491] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:51.353 [2024-12-07 17:25:24.667544] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:51.353 pt2 00:08:51.353 17:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:51.353 17:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:51.353 17:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:51.353 17:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:51.353 17:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.353 17:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.353 [2024-12-07 17:25:24.678509] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:51.353 [2024-12-07 17:25:24.678599] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:51.353 [2024-12-07 17:25:24.678627] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:51.353 [2024-12-07 17:25:24.678655] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:51.353 [2024-12-07 17:25:24.679057] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:51.353 [2024-12-07 17:25:24.679117] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:51.353 [2024-12-07 17:25:24.679207] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:51.353 [2024-12-07 17:25:24.679256] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:51.353 [2024-12-07 17:25:24.679404] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:51.353 [2024-12-07 17:25:24.679443] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:51.353 [2024-12-07 17:25:24.679713] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:51.353 [2024-12-07 17:25:24.679892] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:51.353 [2024-12-07 17:25:24.679945] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:51.353 [2024-12-07 17:25:24.680137] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:51.353 pt3 00:08:51.353 17:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.353 17:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:51.353 17:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:51.353 17:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:51.353 17:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:51.353 17:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:51.353 17:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:51.353 17:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.353 17:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:51.353 17:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.353 17:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.353 17:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.353 17:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.353 17:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.353 17:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:08:51.353 17:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.353 17:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.353 17:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.611 17:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.611 "name": "raid_bdev1", 00:08:51.611 "uuid": "51c33f74-9717-4471-8597-2f75b9237a5c", 00:08:51.611 "strip_size_kb": 64, 00:08:51.611 "state": "online", 00:08:51.611 "raid_level": "raid0", 00:08:51.611 "superblock": true, 00:08:51.611 "num_base_bdevs": 3, 00:08:51.611 "num_base_bdevs_discovered": 3, 00:08:51.611 "num_base_bdevs_operational": 3, 00:08:51.611 "base_bdevs_list": [ 00:08:51.611 { 00:08:51.611 "name": "pt1", 00:08:51.611 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:51.611 "is_configured": true, 00:08:51.611 "data_offset": 2048, 00:08:51.611 "data_size": 63488 00:08:51.611 }, 00:08:51.611 { 00:08:51.611 "name": "pt2", 00:08:51.611 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:51.611 "is_configured": true, 00:08:51.611 "data_offset": 2048, 00:08:51.611 "data_size": 63488 00:08:51.611 }, 00:08:51.611 { 00:08:51.611 "name": "pt3", 00:08:51.611 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:51.611 "is_configured": true, 00:08:51.611 "data_offset": 2048, 00:08:51.611 "data_size": 63488 00:08:51.611 } 00:08:51.611 ] 00:08:51.611 }' 00:08:51.611 17:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.611 17:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.870 17:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:51.870 17:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:51.870 17:25:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:51.870 17:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:51.870 17:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:51.870 17:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:51.870 17:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:51.870 17:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.870 17:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.870 17:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:51.870 [2024-12-07 17:25:25.086169] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:51.870 17:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.870 17:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:51.870 "name": "raid_bdev1", 00:08:51.870 "aliases": [ 00:08:51.870 "51c33f74-9717-4471-8597-2f75b9237a5c" 00:08:51.870 ], 00:08:51.870 "product_name": "Raid Volume", 00:08:51.870 "block_size": 512, 00:08:51.870 "num_blocks": 190464, 00:08:51.870 "uuid": "51c33f74-9717-4471-8597-2f75b9237a5c", 00:08:51.870 "assigned_rate_limits": { 00:08:51.870 "rw_ios_per_sec": 0, 00:08:51.870 "rw_mbytes_per_sec": 0, 00:08:51.870 "r_mbytes_per_sec": 0, 00:08:51.870 "w_mbytes_per_sec": 0 00:08:51.870 }, 00:08:51.870 "claimed": false, 00:08:51.870 "zoned": false, 00:08:51.870 "supported_io_types": { 00:08:51.870 "read": true, 00:08:51.870 "write": true, 00:08:51.870 "unmap": true, 00:08:51.870 "flush": true, 00:08:51.870 "reset": true, 00:08:51.870 "nvme_admin": false, 00:08:51.870 "nvme_io": false, 00:08:51.870 "nvme_io_md": false, 00:08:51.870 
"write_zeroes": true, 00:08:51.870 "zcopy": false, 00:08:51.870 "get_zone_info": false, 00:08:51.870 "zone_management": false, 00:08:51.870 "zone_append": false, 00:08:51.870 "compare": false, 00:08:51.870 "compare_and_write": false, 00:08:51.870 "abort": false, 00:08:51.870 "seek_hole": false, 00:08:51.870 "seek_data": false, 00:08:51.870 "copy": false, 00:08:51.870 "nvme_iov_md": false 00:08:51.870 }, 00:08:51.870 "memory_domains": [ 00:08:51.870 { 00:08:51.870 "dma_device_id": "system", 00:08:51.870 "dma_device_type": 1 00:08:51.870 }, 00:08:51.870 { 00:08:51.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.870 "dma_device_type": 2 00:08:51.870 }, 00:08:51.870 { 00:08:51.870 "dma_device_id": "system", 00:08:51.870 "dma_device_type": 1 00:08:51.870 }, 00:08:51.870 { 00:08:51.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.870 "dma_device_type": 2 00:08:51.870 }, 00:08:51.870 { 00:08:51.870 "dma_device_id": "system", 00:08:51.870 "dma_device_type": 1 00:08:51.870 }, 00:08:51.870 { 00:08:51.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.870 "dma_device_type": 2 00:08:51.870 } 00:08:51.870 ], 00:08:51.870 "driver_specific": { 00:08:51.870 "raid": { 00:08:51.870 "uuid": "51c33f74-9717-4471-8597-2f75b9237a5c", 00:08:51.870 "strip_size_kb": 64, 00:08:51.870 "state": "online", 00:08:51.870 "raid_level": "raid0", 00:08:51.870 "superblock": true, 00:08:51.870 "num_base_bdevs": 3, 00:08:51.870 "num_base_bdevs_discovered": 3, 00:08:51.870 "num_base_bdevs_operational": 3, 00:08:51.870 "base_bdevs_list": [ 00:08:51.870 { 00:08:51.870 "name": "pt1", 00:08:51.870 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:51.870 "is_configured": true, 00:08:51.870 "data_offset": 2048, 00:08:51.870 "data_size": 63488 00:08:51.870 }, 00:08:51.870 { 00:08:51.870 "name": "pt2", 00:08:51.870 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:51.870 "is_configured": true, 00:08:51.870 "data_offset": 2048, 00:08:51.870 "data_size": 63488 00:08:51.870 }, 00:08:51.870 
{ 00:08:51.870 "name": "pt3", 00:08:51.870 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:51.870 "is_configured": true, 00:08:51.870 "data_offset": 2048, 00:08:51.870 "data_size": 63488 00:08:51.870 } 00:08:51.870 ] 00:08:51.870 } 00:08:51.870 } 00:08:51.870 }' 00:08:51.870 17:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:51.870 17:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:51.870 pt2 00:08:51.870 pt3' 00:08:51.871 17:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:51.871 17:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:51.871 17:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:51.871 17:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:51.871 17:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:51.871 17:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.871 17:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.871 17:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.130 17:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:52.130 17:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:52.130 17:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:52.130 17:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:52.130 17:25:25 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.130 17:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:52.130 17:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.130 17:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.130 17:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:52.130 17:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:52.130 17:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:52.130 17:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:52.130 17:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:52.130 17:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.130 17:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.130 17:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.130 17:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:52.130 17:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:52.130 17:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:52.130 17:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.130 17:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:52.130 17:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.130 
[2024-12-07 17:25:25.357625] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:52.130 17:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.130 17:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 51c33f74-9717-4471-8597-2f75b9237a5c '!=' 51c33f74-9717-4471-8597-2f75b9237a5c ']' 00:08:52.130 17:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:52.130 17:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:52.130 17:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:52.130 17:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65091 00:08:52.130 17:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 65091 ']' 00:08:52.130 17:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 65091 00:08:52.130 17:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:52.130 17:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:52.130 17:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65091 00:08:52.130 killing process with pid 65091 00:08:52.130 17:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:52.130 17:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:52.130 17:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65091' 00:08:52.130 17:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 65091 00:08:52.130 [2024-12-07 17:25:25.442642] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:52.130 [2024-12-07 17:25:25.442751] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:52.130 [2024-12-07 17:25:25.442815] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:52.130 17:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 65091 00:08:52.130 [2024-12-07 17:25:25.442829] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:52.389 [2024-12-07 17:25:25.732395] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:53.779 17:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:53.779 00:08:53.779 real 0m5.221s 00:08:53.779 user 0m7.488s 00:08:53.779 sys 0m0.904s 00:08:53.779 17:25:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:53.779 17:25:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.779 ************************************ 00:08:53.779 END TEST raid_superblock_test 00:08:53.779 ************************************ 00:08:53.779 17:25:26 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:08:53.779 17:25:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:53.779 17:25:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:53.779 17:25:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:53.779 ************************************ 00:08:53.779 START TEST raid_read_error_test 00:08:53.779 ************************************ 00:08:53.779 17:25:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:08:53.779 17:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:53.779 17:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:53.779 17:25:26 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:53.779 17:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:53.779 17:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:53.779 17:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:53.779 17:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:53.779 17:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:53.779 17:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:53.779 17:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:53.779 17:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:53.779 17:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:53.779 17:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:53.779 17:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:53.779 17:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:53.779 17:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:53.779 17:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:53.779 17:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:53.779 17:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:53.779 17:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:53.779 17:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:53.779 17:25:26 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:53.779 17:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:53.779 17:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:53.779 17:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:53.779 17:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.SQG4WC4K7p 00:08:53.779 17:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65340 00:08:53.779 17:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:53.779 17:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65340 00:08:53.779 17:25:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65340 ']' 00:08:53.779 17:25:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.779 17:25:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:53.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:53.779 17:25:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:53.779 17:25:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:53.779 17:25:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.779 [2024-12-07 17:25:27.013394] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:08:53.779 [2024-12-07 17:25:27.013503] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65340 ] 00:08:54.038 [2024-12-07 17:25:27.188392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.038 [2024-12-07 17:25:27.297600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.301 [2024-12-07 17:25:27.491260] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:54.301 [2024-12-07 17:25:27.491366] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:54.563 17:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:54.563 17:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:54.563 17:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:54.563 17:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:54.563 17:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.563 17:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.563 BaseBdev1_malloc 00:08:54.563 17:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.563 17:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:54.563 17:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.563 17:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.563 true 00:08:54.563 17:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:54.563 17:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:54.563 17:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.563 17:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.563 [2024-12-07 17:25:27.896326] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:54.563 [2024-12-07 17:25:27.896393] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:54.563 [2024-12-07 17:25:27.896412] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:54.563 [2024-12-07 17:25:27.896422] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:54.563 [2024-12-07 17:25:27.898392] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:54.563 [2024-12-07 17:25:27.898432] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:54.563 BaseBdev1 00:08:54.563 17:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.563 17:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:54.563 17:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:54.563 17:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.563 17:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.823 BaseBdev2_malloc 00:08:54.823 17:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.823 17:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:54.823 17:25:27 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.823 17:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.823 true 00:08:54.823 17:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.823 17:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:54.823 17:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.823 17:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.823 [2024-12-07 17:25:27.961899] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:54.823 [2024-12-07 17:25:27.961975] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:54.823 [2024-12-07 17:25:27.962008] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:54.823 [2024-12-07 17:25:27.962019] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:54.823 [2024-12-07 17:25:27.964092] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:54.823 [2024-12-07 17:25:27.964130] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:54.823 BaseBdev2 00:08:54.823 17:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.823 17:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:54.823 17:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:54.823 17:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.823 17:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.823 BaseBdev3_malloc 00:08:54.823 17:25:28 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.823 17:25:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:54.823 17:25:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.823 17:25:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.823 true 00:08:54.823 17:25:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.823 17:25:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:54.823 17:25:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.823 17:25:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.823 [2024-12-07 17:25:28.039700] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:54.823 [2024-12-07 17:25:28.039858] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:54.823 [2024-12-07 17:25:28.039883] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:54.823 [2024-12-07 17:25:28.039896] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:54.823 [2024-12-07 17:25:28.042047] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:54.823 [2024-12-07 17:25:28.042085] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:54.823 BaseBdev3 00:08:54.823 17:25:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.823 17:25:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:54.823 17:25:28 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.823 17:25:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.823 [2024-12-07 17:25:28.051757] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:54.823 [2024-12-07 17:25:28.053608] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:54.823 [2024-12-07 17:25:28.053679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:54.823 [2024-12-07 17:25:28.053871] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:54.823 [2024-12-07 17:25:28.053886] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:54.823 [2024-12-07 17:25:28.054258] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:08:54.823 [2024-12-07 17:25:28.054478] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:54.823 [2024-12-07 17:25:28.054535] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:54.823 [2024-12-07 17:25:28.054748] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:54.823 17:25:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.823 17:25:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:54.823 17:25:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:54.823 17:25:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:54.823 17:25:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:54.823 17:25:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:54.823 17:25:28 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:54.824 17:25:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.824 17:25:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.824 17:25:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.824 17:25:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.824 17:25:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.824 17:25:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:54.824 17:25:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.824 17:25:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.824 17:25:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.824 17:25:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.824 "name": "raid_bdev1", 00:08:54.824 "uuid": "57450bd7-2cf6-422a-b2ca-b19466df00ec", 00:08:54.824 "strip_size_kb": 64, 00:08:54.824 "state": "online", 00:08:54.824 "raid_level": "raid0", 00:08:54.824 "superblock": true, 00:08:54.824 "num_base_bdevs": 3, 00:08:54.824 "num_base_bdevs_discovered": 3, 00:08:54.824 "num_base_bdevs_operational": 3, 00:08:54.824 "base_bdevs_list": [ 00:08:54.824 { 00:08:54.824 "name": "BaseBdev1", 00:08:54.824 "uuid": "061f0f8b-0689-547b-870d-6ab41648bf07", 00:08:54.824 "is_configured": true, 00:08:54.824 "data_offset": 2048, 00:08:54.824 "data_size": 63488 00:08:54.824 }, 00:08:54.824 { 00:08:54.824 "name": "BaseBdev2", 00:08:54.824 "uuid": "4a576a16-12c7-52a5-94cd-6ea23f288530", 00:08:54.824 "is_configured": true, 00:08:54.824 "data_offset": 2048, 00:08:54.824 "data_size": 63488 
00:08:54.824 }, 00:08:54.824 { 00:08:54.824 "name": "BaseBdev3", 00:08:54.824 "uuid": "b470f4bd-fad5-57ea-9a6d-c88476cb97b8", 00:08:54.824 "is_configured": true, 00:08:54.824 "data_offset": 2048, 00:08:54.824 "data_size": 63488 00:08:54.824 } 00:08:54.824 ] 00:08:54.824 }' 00:08:54.824 17:25:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.824 17:25:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.393 17:25:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:55.393 17:25:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:55.393 [2024-12-07 17:25:28.552365] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:08:56.331 17:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:56.331 17:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.331 17:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.331 17:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.331 17:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:56.331 17:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:56.331 17:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:56.331 17:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:56.331 17:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:56.331 17:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:08:56.331 17:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:56.331 17:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:56.331 17:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:56.331 17:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.331 17:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.331 17:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.331 17:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.331 17:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.331 17:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:56.331 17:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.331 17:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.331 17:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.331 17:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.331 "name": "raid_bdev1", 00:08:56.331 "uuid": "57450bd7-2cf6-422a-b2ca-b19466df00ec", 00:08:56.331 "strip_size_kb": 64, 00:08:56.332 "state": "online", 00:08:56.332 "raid_level": "raid0", 00:08:56.332 "superblock": true, 00:08:56.332 "num_base_bdevs": 3, 00:08:56.332 "num_base_bdevs_discovered": 3, 00:08:56.332 "num_base_bdevs_operational": 3, 00:08:56.332 "base_bdevs_list": [ 00:08:56.332 { 00:08:56.332 "name": "BaseBdev1", 00:08:56.332 "uuid": "061f0f8b-0689-547b-870d-6ab41648bf07", 00:08:56.332 "is_configured": true, 00:08:56.332 "data_offset": 2048, 00:08:56.332 "data_size": 63488 
00:08:56.332 }, 00:08:56.332 { 00:08:56.332 "name": "BaseBdev2", 00:08:56.332 "uuid": "4a576a16-12c7-52a5-94cd-6ea23f288530", 00:08:56.332 "is_configured": true, 00:08:56.332 "data_offset": 2048, 00:08:56.332 "data_size": 63488 00:08:56.332 }, 00:08:56.332 { 00:08:56.332 "name": "BaseBdev3", 00:08:56.332 "uuid": "b470f4bd-fad5-57ea-9a6d-c88476cb97b8", 00:08:56.332 "is_configured": true, 00:08:56.332 "data_offset": 2048, 00:08:56.332 "data_size": 63488 00:08:56.332 } 00:08:56.332 ] 00:08:56.332 }' 00:08:56.332 17:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.332 17:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.591 17:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:56.591 17:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.591 17:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.591 [2024-12-07 17:25:29.932035] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:56.591 [2024-12-07 17:25:29.932171] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:56.591 [2024-12-07 17:25:29.935015] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:56.591 [2024-12-07 17:25:29.935083] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:56.591 [2024-12-07 17:25:29.935122] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:56.591 [2024-12-07 17:25:29.935132] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:56.591 { 00:08:56.591 "results": [ 00:08:56.591 { 00:08:56.591 "job": "raid_bdev1", 00:08:56.591 "core_mask": "0x1", 00:08:56.591 "workload": "randrw", 00:08:56.591 "percentage": 50, 
00:08:56.591 "status": "finished", 00:08:56.591 "queue_depth": 1, 00:08:56.591 "io_size": 131072, 00:08:56.591 "runtime": 1.380758, 00:08:56.591 "iops": 15960.074104223911, 00:08:56.591 "mibps": 1995.009263027989, 00:08:56.591 "io_failed": 1, 00:08:56.591 "io_timeout": 0, 00:08:56.591 "avg_latency_us": 86.94015410460139, 00:08:56.591 "min_latency_us": 24.705676855895195, 00:08:56.591 "max_latency_us": 1359.3711790393013 00:08:56.591 } 00:08:56.591 ], 00:08:56.591 "core_count": 1 00:08:56.591 } 00:08:56.591 17:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.591 17:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65340 00:08:56.591 17:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65340 ']' 00:08:56.591 17:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65340 00:08:56.591 17:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:56.591 17:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:56.591 17:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65340 00:08:56.851 17:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:56.851 17:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:56.851 killing process with pid 65340 00:08:56.851 17:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65340' 00:08:56.851 17:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65340 00:08:56.851 [2024-12-07 17:25:29.982832] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:56.851 17:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65340 00:08:56.851 [2024-12-07 
17:25:30.208057] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:58.230 17:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.SQG4WC4K7p 00:08:58.230 17:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:58.230 17:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:58.230 17:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:08:58.230 17:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:58.230 17:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:58.230 17:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:58.230 17:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:08:58.230 00:08:58.230 real 0m4.465s 00:08:58.230 user 0m5.244s 00:08:58.230 sys 0m0.589s 00:08:58.230 17:25:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:58.230 17:25:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.230 ************************************ 00:08:58.230 END TEST raid_read_error_test 00:08:58.230 ************************************ 00:08:58.230 17:25:31 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:08:58.230 17:25:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:58.230 17:25:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:58.230 17:25:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:58.230 ************************************ 00:08:58.230 START TEST raid_write_error_test 00:08:58.230 ************************************ 00:08:58.230 17:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:08:58.230 17:25:31 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:58.230 17:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:58.230 17:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:58.230 17:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:58.230 17:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:58.230 17:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:58.230 17:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:58.230 17:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:58.230 17:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:58.230 17:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:58.230 17:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:58.230 17:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:58.230 17:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:58.230 17:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:58.230 17:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:58.230 17:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:58.230 17:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:58.230 17:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:58.230 17:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:58.230 17:25:31 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:58.230 17:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:58.230 17:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:58.230 17:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:58.230 17:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:58.230 17:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:58.230 17:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.H8NwKljs2o 00:08:58.230 17:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65485 00:08:58.230 17:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:58.230 17:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65485 00:08:58.230 17:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65485 ']' 00:08:58.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:58.230 17:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:58.230 17:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:58.230 17:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:58.230 17:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:58.230 17:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.230 [2024-12-07 17:25:31.557178] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:08:58.230 [2024-12-07 17:25:31.557301] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65485 ] 00:08:58.490 [2024-12-07 17:25:31.734652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.490 [2024-12-07 17:25:31.842576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.750 [2024-12-07 17:25:32.030832] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:58.750 [2024-12-07 17:25:32.030887] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:59.010 17:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:59.010 17:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:59.010 17:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:59.010 17:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:59.010 17:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.010 17:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.270 BaseBdev1_malloc 00:08:59.270 17:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.270 17:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:08:59.270 17:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.270 17:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.270 true 00:08:59.270 17:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.270 17:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:59.270 17:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.270 17:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.270 [2024-12-07 17:25:32.414897] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:59.270 [2024-12-07 17:25:32.414971] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:59.270 [2024-12-07 17:25:32.414991] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:59.270 [2024-12-07 17:25:32.415002] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:59.270 [2024-12-07 17:25:32.417101] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:59.270 [2024-12-07 17:25:32.417196] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:59.270 BaseBdev1 00:08:59.270 17:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.270 17:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:59.270 17:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:59.270 17:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.270 17:25:32 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:59.270 BaseBdev2_malloc 00:08:59.270 17:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.270 17:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:59.270 17:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.270 17:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.270 true 00:08:59.270 17:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.270 17:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:59.270 17:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.270 17:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.270 [2024-12-07 17:25:32.480829] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:59.271 [2024-12-07 17:25:32.480888] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:59.271 [2024-12-07 17:25:32.480903] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:59.271 [2024-12-07 17:25:32.480912] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:59.271 [2024-12-07 17:25:32.482954] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:59.271 [2024-12-07 17:25:32.482989] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:59.271 BaseBdev2 00:08:59.271 17:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.271 17:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:59.271 17:25:32 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:59.271 17:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.271 17:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.271 BaseBdev3_malloc 00:08:59.271 17:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.271 17:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:59.271 17:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.271 17:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.271 true 00:08:59.271 17:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.271 17:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:59.271 17:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.271 17:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.271 [2024-12-07 17:25:32.580496] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:59.271 [2024-12-07 17:25:32.580553] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:59.271 [2024-12-07 17:25:32.580586] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:59.271 [2024-12-07 17:25:32.580596] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:59.271 [2024-12-07 17:25:32.582602] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:59.271 [2024-12-07 17:25:32.582639] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:08:59.271 BaseBdev3 00:08:59.271 17:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.271 17:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:59.271 17:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.271 17:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.271 [2024-12-07 17:25:32.592554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:59.271 [2024-12-07 17:25:32.594291] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:59.271 [2024-12-07 17:25:32.594360] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:59.271 [2024-12-07 17:25:32.594552] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:59.271 [2024-12-07 17:25:32.594566] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:59.271 [2024-12-07 17:25:32.594797] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:08:59.271 [2024-12-07 17:25:32.594972] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:59.271 [2024-12-07 17:25:32.594987] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:59.271 [2024-12-07 17:25:32.595151] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:59.271 17:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.271 17:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:59.271 17:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:08:59.271 17:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:59.271 17:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:59.271 17:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.271 17:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.271 17:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.271 17:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.271 17:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.271 17:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.271 17:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.271 17:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:59.271 17:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.271 17:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.271 17:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.531 17:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.531 "name": "raid_bdev1", 00:08:59.531 "uuid": "174f4617-4e7e-462f-8b8d-e95c9e308e8a", 00:08:59.531 "strip_size_kb": 64, 00:08:59.531 "state": "online", 00:08:59.531 "raid_level": "raid0", 00:08:59.531 "superblock": true, 00:08:59.531 "num_base_bdevs": 3, 00:08:59.531 "num_base_bdevs_discovered": 3, 00:08:59.531 "num_base_bdevs_operational": 3, 00:08:59.531 "base_bdevs_list": [ 00:08:59.531 { 00:08:59.531 "name": "BaseBdev1", 
00:08:59.531 "uuid": "b72f72a6-ed3f-55d0-8cb8-c323b7f1d654", 00:08:59.531 "is_configured": true, 00:08:59.531 "data_offset": 2048, 00:08:59.531 "data_size": 63488 00:08:59.531 }, 00:08:59.531 { 00:08:59.531 "name": "BaseBdev2", 00:08:59.531 "uuid": "f23b56f0-6c0a-5300-b842-4a0e97014d3d", 00:08:59.531 "is_configured": true, 00:08:59.531 "data_offset": 2048, 00:08:59.531 "data_size": 63488 00:08:59.531 }, 00:08:59.531 { 00:08:59.531 "name": "BaseBdev3", 00:08:59.531 "uuid": "16b45933-09cc-5193-ba75-d9d4d0fd3014", 00:08:59.531 "is_configured": true, 00:08:59.531 "data_offset": 2048, 00:08:59.531 "data_size": 63488 00:08:59.531 } 00:08:59.531 ] 00:08:59.531 }' 00:08:59.531 17:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.531 17:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.791 17:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:59.791 17:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:59.791 [2024-12-07 17:25:33.080929] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:00.731 17:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:00.731 17:25:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.731 17:25:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.731 17:25:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.731 17:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:00.731 17:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:00.731 17:25:33 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:00.731 17:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:00.731 17:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:00.731 17:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:00.731 17:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:00.731 17:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.731 17:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.731 17:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.731 17:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.731 17:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.731 17:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.731 17:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.731 17:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:00.731 17:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.731 17:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.731 17:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.731 17:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.731 "name": "raid_bdev1", 00:09:00.731 "uuid": "174f4617-4e7e-462f-8b8d-e95c9e308e8a", 00:09:00.731 "strip_size_kb": 64, 00:09:00.731 "state": "online", 00:09:00.731 
"raid_level": "raid0", 00:09:00.731 "superblock": true, 00:09:00.731 "num_base_bdevs": 3, 00:09:00.731 "num_base_bdevs_discovered": 3, 00:09:00.731 "num_base_bdevs_operational": 3, 00:09:00.731 "base_bdevs_list": [ 00:09:00.731 { 00:09:00.731 "name": "BaseBdev1", 00:09:00.731 "uuid": "b72f72a6-ed3f-55d0-8cb8-c323b7f1d654", 00:09:00.731 "is_configured": true, 00:09:00.731 "data_offset": 2048, 00:09:00.731 "data_size": 63488 00:09:00.731 }, 00:09:00.731 { 00:09:00.731 "name": "BaseBdev2", 00:09:00.731 "uuid": "f23b56f0-6c0a-5300-b842-4a0e97014d3d", 00:09:00.731 "is_configured": true, 00:09:00.731 "data_offset": 2048, 00:09:00.731 "data_size": 63488 00:09:00.731 }, 00:09:00.731 { 00:09:00.731 "name": "BaseBdev3", 00:09:00.731 "uuid": "16b45933-09cc-5193-ba75-d9d4d0fd3014", 00:09:00.731 "is_configured": true, 00:09:00.731 "data_offset": 2048, 00:09:00.731 "data_size": 63488 00:09:00.731 } 00:09:00.731 ] 00:09:00.731 }' 00:09:00.731 17:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.731 17:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.302 17:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:01.302 17:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.302 17:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.302 [2024-12-07 17:25:34.404152] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:01.302 [2024-12-07 17:25:34.404284] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:01.302 [2024-12-07 17:25:34.406900] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:01.302 [2024-12-07 17:25:34.407001] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:01.302 [2024-12-07 17:25:34.407083] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:01.302 [2024-12-07 17:25:34.407128] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:01.302 { 00:09:01.302 "results": [ 00:09:01.302 { 00:09:01.302 "job": "raid_bdev1", 00:09:01.302 "core_mask": "0x1", 00:09:01.302 "workload": "randrw", 00:09:01.302 "percentage": 50, 00:09:01.302 "status": "finished", 00:09:01.302 "queue_depth": 1, 00:09:01.302 "io_size": 131072, 00:09:01.302 "runtime": 1.324285, 00:09:01.302 "iops": 16243.482331975369, 00:09:01.302 "mibps": 2030.435291496921, 00:09:01.302 "io_failed": 1, 00:09:01.302 "io_timeout": 0, 00:09:01.302 "avg_latency_us": 85.39535191894521, 00:09:01.302 "min_latency_us": 25.152838427947597, 00:09:01.302 "max_latency_us": 1380.8349344978167 00:09:01.302 } 00:09:01.302 ], 00:09:01.302 "core_count": 1 00:09:01.302 } 00:09:01.302 17:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.302 17:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65485 00:09:01.302 17:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65485 ']' 00:09:01.302 17:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65485 00:09:01.302 17:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:01.302 17:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:01.302 17:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65485 00:09:01.302 17:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:01.302 killing process with pid 65485 00:09:01.302 17:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:01.302 17:25:34 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65485' 00:09:01.302 17:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65485 00:09:01.302 [2024-12-07 17:25:34.442230] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:01.302 17:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65485 00:09:01.302 [2024-12-07 17:25:34.661194] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:02.682 17:25:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.H8NwKljs2o 00:09:02.682 17:25:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:02.682 17:25:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:02.682 17:25:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.76 00:09:02.682 17:25:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:02.682 ************************************ 00:09:02.682 END TEST raid_write_error_test 00:09:02.682 ************************************ 00:09:02.682 17:25:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:02.682 17:25:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:02.682 17:25:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.76 != \0\.\0\0 ]] 00:09:02.682 00:09:02.682 real 0m4.364s 00:09:02.682 user 0m5.096s 00:09:02.682 sys 0m0.557s 00:09:02.682 17:25:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:02.682 17:25:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.682 17:25:35 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:02.682 17:25:35 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:09:02.682 17:25:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:02.682 17:25:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:02.682 17:25:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:02.682 ************************************ 00:09:02.682 START TEST raid_state_function_test 00:09:02.682 ************************************ 00:09:02.682 17:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:09:02.682 17:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:02.682 17:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:02.682 17:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:02.682 17:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:02.682 17:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:02.682 17:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:02.682 17:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:02.682 17:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:02.682 17:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:02.682 17:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:02.682 17:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:02.682 17:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:02.682 17:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:02.682 17:25:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:02.682 17:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:02.682 17:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:02.682 17:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:02.682 17:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:02.682 17:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:02.682 17:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:02.682 17:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:02.682 17:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:02.682 17:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:02.682 17:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:02.682 17:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:02.682 17:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:02.682 17:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65629 00:09:02.682 17:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:02.682 17:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65629' 00:09:02.682 Process raid pid: 65629 00:09:02.682 17:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65629 00:09:02.682 17:25:35 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65629 ']' 00:09:02.682 17:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.682 17:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:02.682 17:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.682 17:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:02.682 17:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.682 [2024-12-07 17:25:35.982103] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:09:02.682 [2024-12-07 17:25:35.982328] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:02.941 [2024-12-07 17:25:36.151619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.941 [2024-12-07 17:25:36.261301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.201 [2024-12-07 17:25:36.453078] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:03.201 [2024-12-07 17:25:36.453201] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:03.460 17:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:03.460 17:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:03.460 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:03.460 17:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.460 17:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.460 [2024-12-07 17:25:36.816816] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:03.460 [2024-12-07 17:25:36.816872] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:03.460 [2024-12-07 17:25:36.816883] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:03.460 [2024-12-07 17:25:36.816893] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:03.460 [2024-12-07 17:25:36.816899] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:03.460 [2024-12-07 17:25:36.816907] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:03.460 17:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.460 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:03.460 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.460 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:03.460 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:03.460 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.460 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:03.460 17:25:36 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.460 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.461 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.461 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.461 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.461 17:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.461 17:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.461 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.720 17:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.720 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.720 "name": "Existed_Raid", 00:09:03.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.720 "strip_size_kb": 64, 00:09:03.720 "state": "configuring", 00:09:03.720 "raid_level": "concat", 00:09:03.720 "superblock": false, 00:09:03.720 "num_base_bdevs": 3, 00:09:03.720 "num_base_bdevs_discovered": 0, 00:09:03.720 "num_base_bdevs_operational": 3, 00:09:03.720 "base_bdevs_list": [ 00:09:03.720 { 00:09:03.720 "name": "BaseBdev1", 00:09:03.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.720 "is_configured": false, 00:09:03.720 "data_offset": 0, 00:09:03.720 "data_size": 0 00:09:03.720 }, 00:09:03.720 { 00:09:03.720 "name": "BaseBdev2", 00:09:03.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.720 "is_configured": false, 00:09:03.720 "data_offset": 0, 00:09:03.720 "data_size": 0 00:09:03.720 }, 00:09:03.720 { 00:09:03.720 "name": "BaseBdev3", 00:09:03.720 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:03.720 "is_configured": false, 00:09:03.720 "data_offset": 0, 00:09:03.720 "data_size": 0 00:09:03.720 } 00:09:03.720 ] 00:09:03.720 }' 00:09:03.720 17:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.720 17:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.981 17:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:03.981 17:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.981 17:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.981 [2024-12-07 17:25:37.216067] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:03.981 [2024-12-07 17:25:37.216154] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:03.981 17:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.981 17:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:03.981 17:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.981 17:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.981 [2024-12-07 17:25:37.228076] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:03.981 [2024-12-07 17:25:37.228156] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:03.981 [2024-12-07 17:25:37.228183] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:03.981 [2024-12-07 17:25:37.228206] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:09:03.981 [2024-12-07 17:25:37.228223] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:03.981 [2024-12-07 17:25:37.228243] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:03.981 17:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.981 17:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:03.981 17:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.981 17:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.981 [2024-12-07 17:25:37.273616] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:03.981 BaseBdev1 00:09:03.981 17:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.981 17:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:03.981 17:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:03.981 17:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:03.981 17:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:03.981 17:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:03.981 17:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:03.981 17:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:03.981 17:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.981 17:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:03.981 17:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.981 17:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:03.981 17:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.981 17:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.981 [ 00:09:03.981 { 00:09:03.981 "name": "BaseBdev1", 00:09:03.981 "aliases": [ 00:09:03.981 "3840f641-65ce-4049-b971-b96116cd0861" 00:09:03.981 ], 00:09:03.981 "product_name": "Malloc disk", 00:09:03.981 "block_size": 512, 00:09:03.981 "num_blocks": 65536, 00:09:03.981 "uuid": "3840f641-65ce-4049-b971-b96116cd0861", 00:09:03.981 "assigned_rate_limits": { 00:09:03.981 "rw_ios_per_sec": 0, 00:09:03.981 "rw_mbytes_per_sec": 0, 00:09:03.981 "r_mbytes_per_sec": 0, 00:09:03.981 "w_mbytes_per_sec": 0 00:09:03.981 }, 00:09:03.981 "claimed": true, 00:09:03.981 "claim_type": "exclusive_write", 00:09:03.981 "zoned": false, 00:09:03.981 "supported_io_types": { 00:09:03.981 "read": true, 00:09:03.981 "write": true, 00:09:03.981 "unmap": true, 00:09:03.981 "flush": true, 00:09:03.981 "reset": true, 00:09:03.981 "nvme_admin": false, 00:09:03.981 "nvme_io": false, 00:09:03.981 "nvme_io_md": false, 00:09:03.981 "write_zeroes": true, 00:09:03.981 "zcopy": true, 00:09:03.981 "get_zone_info": false, 00:09:03.981 "zone_management": false, 00:09:03.981 "zone_append": false, 00:09:03.981 "compare": false, 00:09:03.981 "compare_and_write": false, 00:09:03.981 "abort": true, 00:09:03.981 "seek_hole": false, 00:09:03.981 "seek_data": false, 00:09:03.981 "copy": true, 00:09:03.981 "nvme_iov_md": false 00:09:03.981 }, 00:09:03.981 "memory_domains": [ 00:09:03.981 { 00:09:03.981 "dma_device_id": "system", 00:09:03.981 "dma_device_type": 1 00:09:03.981 }, 00:09:03.981 { 00:09:03.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:03.981 "dma_device_type": 2 00:09:03.981 } 00:09:03.981 ], 00:09:03.981 "driver_specific": {} 00:09:03.981 } 00:09:03.981 ] 00:09:03.981 17:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.981 17:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:03.981 17:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:03.981 17:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.981 17:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:03.981 17:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:03.981 17:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.981 17:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:03.981 17:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.981 17:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.981 17:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.981 17:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.981 17:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.981 17:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.981 17:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.981 17:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.981 17:25:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.241 17:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.241 "name": "Existed_Raid", 00:09:04.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.241 "strip_size_kb": 64, 00:09:04.241 "state": "configuring", 00:09:04.241 "raid_level": "concat", 00:09:04.241 "superblock": false, 00:09:04.241 "num_base_bdevs": 3, 00:09:04.241 "num_base_bdevs_discovered": 1, 00:09:04.241 "num_base_bdevs_operational": 3, 00:09:04.241 "base_bdevs_list": [ 00:09:04.241 { 00:09:04.241 "name": "BaseBdev1", 00:09:04.241 "uuid": "3840f641-65ce-4049-b971-b96116cd0861", 00:09:04.241 "is_configured": true, 00:09:04.241 "data_offset": 0, 00:09:04.241 "data_size": 65536 00:09:04.241 }, 00:09:04.241 { 00:09:04.241 "name": "BaseBdev2", 00:09:04.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.241 "is_configured": false, 00:09:04.241 "data_offset": 0, 00:09:04.241 "data_size": 0 00:09:04.241 }, 00:09:04.241 { 00:09:04.241 "name": "BaseBdev3", 00:09:04.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.241 "is_configured": false, 00:09:04.241 "data_offset": 0, 00:09:04.241 "data_size": 0 00:09:04.241 } 00:09:04.241 ] 00:09:04.241 }' 00:09:04.241 17:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.241 17:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.501 17:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:04.501 17:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.501 17:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.501 [2024-12-07 17:25:37.756838] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:04.501 [2024-12-07 17:25:37.756895] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:04.501 17:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.501 17:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:04.501 17:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.501 17:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.501 [2024-12-07 17:25:37.768852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:04.501 [2024-12-07 17:25:37.770655] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:04.501 [2024-12-07 17:25:37.770694] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:04.501 [2024-12-07 17:25:37.770704] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:04.501 [2024-12-07 17:25:37.770712] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:04.501 17:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.501 17:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:04.501 17:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:04.501 17:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:04.501 17:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.501 17:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:04.501 17:25:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:04.501 17:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.501 17:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.501 17:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.502 17:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.502 17:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.502 17:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.502 17:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.502 17:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.502 17:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.502 17:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.502 17:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.502 17:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.502 "name": "Existed_Raid", 00:09:04.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.502 "strip_size_kb": 64, 00:09:04.502 "state": "configuring", 00:09:04.502 "raid_level": "concat", 00:09:04.502 "superblock": false, 00:09:04.502 "num_base_bdevs": 3, 00:09:04.502 "num_base_bdevs_discovered": 1, 00:09:04.502 "num_base_bdevs_operational": 3, 00:09:04.502 "base_bdevs_list": [ 00:09:04.502 { 00:09:04.502 "name": "BaseBdev1", 00:09:04.502 "uuid": "3840f641-65ce-4049-b971-b96116cd0861", 00:09:04.502 "is_configured": true, 00:09:04.502 "data_offset": 
0, 00:09:04.502 "data_size": 65536 00:09:04.502 }, 00:09:04.502 { 00:09:04.502 "name": "BaseBdev2", 00:09:04.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.502 "is_configured": false, 00:09:04.502 "data_offset": 0, 00:09:04.502 "data_size": 0 00:09:04.502 }, 00:09:04.502 { 00:09:04.502 "name": "BaseBdev3", 00:09:04.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.502 "is_configured": false, 00:09:04.502 "data_offset": 0, 00:09:04.502 "data_size": 0 00:09:04.502 } 00:09:04.502 ] 00:09:04.502 }' 00:09:04.502 17:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.502 17:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.071 17:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:05.071 17:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.071 17:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.071 [2024-12-07 17:25:38.244410] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:05.071 BaseBdev2 00:09:05.071 17:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.071 17:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:05.071 17:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:05.071 17:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:05.071 17:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:05.071 17:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:05.071 17:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:09:05.071 17:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:05.071 17:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.071 17:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.071 17:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.071 17:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:05.071 17:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.071 17:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.071 [ 00:09:05.071 { 00:09:05.071 "name": "BaseBdev2", 00:09:05.071 "aliases": [ 00:09:05.071 "d289646d-be92-4799-bf58-95126a3fc0f5" 00:09:05.071 ], 00:09:05.071 "product_name": "Malloc disk", 00:09:05.071 "block_size": 512, 00:09:05.071 "num_blocks": 65536, 00:09:05.071 "uuid": "d289646d-be92-4799-bf58-95126a3fc0f5", 00:09:05.071 "assigned_rate_limits": { 00:09:05.071 "rw_ios_per_sec": 0, 00:09:05.071 "rw_mbytes_per_sec": 0, 00:09:05.071 "r_mbytes_per_sec": 0, 00:09:05.071 "w_mbytes_per_sec": 0 00:09:05.071 }, 00:09:05.071 "claimed": true, 00:09:05.071 "claim_type": "exclusive_write", 00:09:05.071 "zoned": false, 00:09:05.071 "supported_io_types": { 00:09:05.071 "read": true, 00:09:05.071 "write": true, 00:09:05.071 "unmap": true, 00:09:05.071 "flush": true, 00:09:05.071 "reset": true, 00:09:05.071 "nvme_admin": false, 00:09:05.071 "nvme_io": false, 00:09:05.071 "nvme_io_md": false, 00:09:05.071 "write_zeroes": true, 00:09:05.071 "zcopy": true, 00:09:05.071 "get_zone_info": false, 00:09:05.071 "zone_management": false, 00:09:05.071 "zone_append": false, 00:09:05.071 "compare": false, 00:09:05.071 "compare_and_write": false, 00:09:05.071 "abort": true, 00:09:05.071 "seek_hole": 
false, 00:09:05.071 "seek_data": false, 00:09:05.071 "copy": true, 00:09:05.071 "nvme_iov_md": false 00:09:05.071 }, 00:09:05.071 "memory_domains": [ 00:09:05.071 { 00:09:05.071 "dma_device_id": "system", 00:09:05.071 "dma_device_type": 1 00:09:05.072 }, 00:09:05.072 { 00:09:05.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.072 "dma_device_type": 2 00:09:05.072 } 00:09:05.072 ], 00:09:05.072 "driver_specific": {} 00:09:05.072 } 00:09:05.072 ] 00:09:05.072 17:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.072 17:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:05.072 17:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:05.072 17:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:05.072 17:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:05.072 17:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.072 17:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.072 17:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:05.072 17:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.072 17:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.072 17:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.072 17:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.072 17:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.072 17:25:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.072 17:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.072 17:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.072 17:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.072 17:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.072 17:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.072 17:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.072 "name": "Existed_Raid", 00:09:05.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.072 "strip_size_kb": 64, 00:09:05.072 "state": "configuring", 00:09:05.072 "raid_level": "concat", 00:09:05.072 "superblock": false, 00:09:05.072 "num_base_bdevs": 3, 00:09:05.072 "num_base_bdevs_discovered": 2, 00:09:05.072 "num_base_bdevs_operational": 3, 00:09:05.072 "base_bdevs_list": [ 00:09:05.072 { 00:09:05.072 "name": "BaseBdev1", 00:09:05.072 "uuid": "3840f641-65ce-4049-b971-b96116cd0861", 00:09:05.072 "is_configured": true, 00:09:05.072 "data_offset": 0, 00:09:05.072 "data_size": 65536 00:09:05.072 }, 00:09:05.072 { 00:09:05.072 "name": "BaseBdev2", 00:09:05.072 "uuid": "d289646d-be92-4799-bf58-95126a3fc0f5", 00:09:05.072 "is_configured": true, 00:09:05.072 "data_offset": 0, 00:09:05.072 "data_size": 65536 00:09:05.072 }, 00:09:05.072 { 00:09:05.072 "name": "BaseBdev3", 00:09:05.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.072 "is_configured": false, 00:09:05.072 "data_offset": 0, 00:09:05.072 "data_size": 0 00:09:05.072 } 00:09:05.072 ] 00:09:05.072 }' 00:09:05.072 17:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.072 17:25:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:05.642 17:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:05.642 17:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.642 17:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.642 [2024-12-07 17:25:38.791553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:05.642 [2024-12-07 17:25:38.791594] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:05.642 [2024-12-07 17:25:38.791607] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:05.642 [2024-12-07 17:25:38.791885] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:05.642 [2024-12-07 17:25:38.792092] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:05.642 [2024-12-07 17:25:38.792111] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:05.642 [2024-12-07 17:25:38.792398] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:05.642 BaseBdev3 00:09:05.642 17:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.642 17:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:05.642 17:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:05.642 17:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:05.642 17:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:05.642 17:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:05.642 17:25:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:05.642 17:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:05.642 17:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.642 17:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.642 17:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.642 17:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:05.642 17:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.642 17:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.642 [ 00:09:05.642 { 00:09:05.642 "name": "BaseBdev3", 00:09:05.642 "aliases": [ 00:09:05.642 "c00ea7e8-4ecf-47c9-aba1-fad2ba455f6e" 00:09:05.642 ], 00:09:05.642 "product_name": "Malloc disk", 00:09:05.642 "block_size": 512, 00:09:05.642 "num_blocks": 65536, 00:09:05.642 "uuid": "c00ea7e8-4ecf-47c9-aba1-fad2ba455f6e", 00:09:05.642 "assigned_rate_limits": { 00:09:05.642 "rw_ios_per_sec": 0, 00:09:05.642 "rw_mbytes_per_sec": 0, 00:09:05.642 "r_mbytes_per_sec": 0, 00:09:05.642 "w_mbytes_per_sec": 0 00:09:05.642 }, 00:09:05.642 "claimed": true, 00:09:05.642 "claim_type": "exclusive_write", 00:09:05.642 "zoned": false, 00:09:05.642 "supported_io_types": { 00:09:05.642 "read": true, 00:09:05.642 "write": true, 00:09:05.642 "unmap": true, 00:09:05.642 "flush": true, 00:09:05.642 "reset": true, 00:09:05.642 "nvme_admin": false, 00:09:05.642 "nvme_io": false, 00:09:05.642 "nvme_io_md": false, 00:09:05.643 "write_zeroes": true, 00:09:05.643 "zcopy": true, 00:09:05.643 "get_zone_info": false, 00:09:05.643 "zone_management": false, 00:09:05.643 "zone_append": false, 00:09:05.643 "compare": false, 
00:09:05.643 "compare_and_write": false, 00:09:05.643 "abort": true, 00:09:05.643 "seek_hole": false, 00:09:05.643 "seek_data": false, 00:09:05.643 "copy": true, 00:09:05.643 "nvme_iov_md": false 00:09:05.643 }, 00:09:05.643 "memory_domains": [ 00:09:05.643 { 00:09:05.643 "dma_device_id": "system", 00:09:05.643 "dma_device_type": 1 00:09:05.643 }, 00:09:05.643 { 00:09:05.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.643 "dma_device_type": 2 00:09:05.643 } 00:09:05.643 ], 00:09:05.643 "driver_specific": {} 00:09:05.643 } 00:09:05.643 ] 00:09:05.643 17:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.643 17:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:05.643 17:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:05.643 17:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:05.643 17:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:05.643 17:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.643 17:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:05.643 17:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:05.643 17:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.643 17:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.643 17:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.643 17:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.643 17:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:05.643 17:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.643 17:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.643 17:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.643 17:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.643 17:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.643 17:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.643 17:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.643 "name": "Existed_Raid", 00:09:05.643 "uuid": "a0e8609c-c123-4093-9d1e-b8a3ebe3bf9f", 00:09:05.643 "strip_size_kb": 64, 00:09:05.643 "state": "online", 00:09:05.643 "raid_level": "concat", 00:09:05.643 "superblock": false, 00:09:05.643 "num_base_bdevs": 3, 00:09:05.643 "num_base_bdevs_discovered": 3, 00:09:05.643 "num_base_bdevs_operational": 3, 00:09:05.643 "base_bdevs_list": [ 00:09:05.643 { 00:09:05.643 "name": "BaseBdev1", 00:09:05.643 "uuid": "3840f641-65ce-4049-b971-b96116cd0861", 00:09:05.643 "is_configured": true, 00:09:05.643 "data_offset": 0, 00:09:05.643 "data_size": 65536 00:09:05.643 }, 00:09:05.643 { 00:09:05.643 "name": "BaseBdev2", 00:09:05.643 "uuid": "d289646d-be92-4799-bf58-95126a3fc0f5", 00:09:05.643 "is_configured": true, 00:09:05.643 "data_offset": 0, 00:09:05.643 "data_size": 65536 00:09:05.643 }, 00:09:05.643 { 00:09:05.643 "name": "BaseBdev3", 00:09:05.643 "uuid": "c00ea7e8-4ecf-47c9-aba1-fad2ba455f6e", 00:09:05.643 "is_configured": true, 00:09:05.643 "data_offset": 0, 00:09:05.643 "data_size": 65536 00:09:05.643 } 00:09:05.643 ] 00:09:05.643 }' 00:09:05.643 17:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:09:05.643 17:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.903 17:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:05.903 17:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:05.903 17:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:05.903 17:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:05.903 17:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:05.903 17:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:05.903 17:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:05.903 17:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.903 17:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.903 17:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:05.903 [2024-12-07 17:25:39.259137] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:05.903 17:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.163 17:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:06.163 "name": "Existed_Raid", 00:09:06.163 "aliases": [ 00:09:06.163 "a0e8609c-c123-4093-9d1e-b8a3ebe3bf9f" 00:09:06.163 ], 00:09:06.163 "product_name": "Raid Volume", 00:09:06.163 "block_size": 512, 00:09:06.163 "num_blocks": 196608, 00:09:06.163 "uuid": "a0e8609c-c123-4093-9d1e-b8a3ebe3bf9f", 00:09:06.163 "assigned_rate_limits": { 00:09:06.163 "rw_ios_per_sec": 0, 00:09:06.163 "rw_mbytes_per_sec": 0, 00:09:06.163 "r_mbytes_per_sec": 
0, 00:09:06.163 "w_mbytes_per_sec": 0 00:09:06.163 }, 00:09:06.163 "claimed": false, 00:09:06.163 "zoned": false, 00:09:06.163 "supported_io_types": { 00:09:06.163 "read": true, 00:09:06.163 "write": true, 00:09:06.163 "unmap": true, 00:09:06.163 "flush": true, 00:09:06.163 "reset": true, 00:09:06.163 "nvme_admin": false, 00:09:06.163 "nvme_io": false, 00:09:06.163 "nvme_io_md": false, 00:09:06.163 "write_zeroes": true, 00:09:06.163 "zcopy": false, 00:09:06.163 "get_zone_info": false, 00:09:06.163 "zone_management": false, 00:09:06.163 "zone_append": false, 00:09:06.163 "compare": false, 00:09:06.163 "compare_and_write": false, 00:09:06.163 "abort": false, 00:09:06.163 "seek_hole": false, 00:09:06.163 "seek_data": false, 00:09:06.163 "copy": false, 00:09:06.163 "nvme_iov_md": false 00:09:06.163 }, 00:09:06.163 "memory_domains": [ 00:09:06.163 { 00:09:06.163 "dma_device_id": "system", 00:09:06.163 "dma_device_type": 1 00:09:06.163 }, 00:09:06.163 { 00:09:06.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.163 "dma_device_type": 2 00:09:06.163 }, 00:09:06.163 { 00:09:06.163 "dma_device_id": "system", 00:09:06.163 "dma_device_type": 1 00:09:06.163 }, 00:09:06.163 { 00:09:06.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.163 "dma_device_type": 2 00:09:06.163 }, 00:09:06.163 { 00:09:06.163 "dma_device_id": "system", 00:09:06.163 "dma_device_type": 1 00:09:06.163 }, 00:09:06.163 { 00:09:06.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.163 "dma_device_type": 2 00:09:06.163 } 00:09:06.163 ], 00:09:06.163 "driver_specific": { 00:09:06.163 "raid": { 00:09:06.163 "uuid": "a0e8609c-c123-4093-9d1e-b8a3ebe3bf9f", 00:09:06.163 "strip_size_kb": 64, 00:09:06.163 "state": "online", 00:09:06.163 "raid_level": "concat", 00:09:06.163 "superblock": false, 00:09:06.163 "num_base_bdevs": 3, 00:09:06.163 "num_base_bdevs_discovered": 3, 00:09:06.163 "num_base_bdevs_operational": 3, 00:09:06.163 "base_bdevs_list": [ 00:09:06.163 { 00:09:06.163 "name": "BaseBdev1", 
00:09:06.163 "uuid": "3840f641-65ce-4049-b971-b96116cd0861", 00:09:06.163 "is_configured": true, 00:09:06.163 "data_offset": 0, 00:09:06.163 "data_size": 65536 00:09:06.163 }, 00:09:06.163 { 00:09:06.163 "name": "BaseBdev2", 00:09:06.163 "uuid": "d289646d-be92-4799-bf58-95126a3fc0f5", 00:09:06.163 "is_configured": true, 00:09:06.163 "data_offset": 0, 00:09:06.163 "data_size": 65536 00:09:06.163 }, 00:09:06.163 { 00:09:06.163 "name": "BaseBdev3", 00:09:06.163 "uuid": "c00ea7e8-4ecf-47c9-aba1-fad2ba455f6e", 00:09:06.163 "is_configured": true, 00:09:06.163 "data_offset": 0, 00:09:06.163 "data_size": 65536 00:09:06.163 } 00:09:06.163 ] 00:09:06.163 } 00:09:06.163 } 00:09:06.163 }' 00:09:06.163 17:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:06.163 17:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:06.163 BaseBdev2 00:09:06.163 BaseBdev3' 00:09:06.163 17:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.163 17:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:06.163 17:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:06.163 17:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.163 17:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:06.163 17:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.163 17:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.163 17:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:06.163 17:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:06.163 17:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:06.163 17:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:06.163 17:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.163 17:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:06.163 17:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.163 17:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.163 17:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.163 17:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:06.163 17:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:06.163 17:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:06.163 17:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.163 17:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:06.163 17:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.163 17:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.163 17:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.163 17:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:09:06.163 17:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:06.163 17:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:06.163 17:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.163 17:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.163 [2024-12-07 17:25:39.498450] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:06.163 [2024-12-07 17:25:39.498479] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:06.163 [2024-12-07 17:25:39.498531] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:06.423 17:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.423 17:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:06.423 17:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:06.423 17:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:06.423 17:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:06.423 17:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:06.423 17:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:06.423 17:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:06.423 17:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:06.423 17:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:06.423 17:25:39 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.423 17:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:06.423 17:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.423 17:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.423 17:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.423 17:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.423 17:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.423 17:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.423 17:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.423 17:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.423 17:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.423 17:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.423 "name": "Existed_Raid", 00:09:06.423 "uuid": "a0e8609c-c123-4093-9d1e-b8a3ebe3bf9f", 00:09:06.423 "strip_size_kb": 64, 00:09:06.423 "state": "offline", 00:09:06.423 "raid_level": "concat", 00:09:06.423 "superblock": false, 00:09:06.423 "num_base_bdevs": 3, 00:09:06.423 "num_base_bdevs_discovered": 2, 00:09:06.423 "num_base_bdevs_operational": 2, 00:09:06.423 "base_bdevs_list": [ 00:09:06.423 { 00:09:06.423 "name": null, 00:09:06.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.423 "is_configured": false, 00:09:06.423 "data_offset": 0, 00:09:06.423 "data_size": 65536 00:09:06.423 }, 00:09:06.423 { 00:09:06.423 "name": "BaseBdev2", 00:09:06.423 "uuid": 
"d289646d-be92-4799-bf58-95126a3fc0f5", 00:09:06.423 "is_configured": true, 00:09:06.423 "data_offset": 0, 00:09:06.423 "data_size": 65536 00:09:06.423 }, 00:09:06.423 { 00:09:06.423 "name": "BaseBdev3", 00:09:06.423 "uuid": "c00ea7e8-4ecf-47c9-aba1-fad2ba455f6e", 00:09:06.423 "is_configured": true, 00:09:06.423 "data_offset": 0, 00:09:06.423 "data_size": 65536 00:09:06.423 } 00:09:06.423 ] 00:09:06.423 }' 00:09:06.423 17:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.423 17:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.682 17:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:06.682 17:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:06.682 17:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.682 17:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:06.682 17:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.682 17:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.682 17:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.682 17:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:06.682 17:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:06.682 17:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:06.682 17:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.682 17:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.682 [2024-12-07 17:25:40.031315] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:06.940 17:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.940 17:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:06.940 17:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:06.941 17:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:06.941 17:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.941 17:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.941 17:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.941 17:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.941 17:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:06.941 17:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:06.941 17:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:06.941 17:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.941 17:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.941 [2024-12-07 17:25:40.182351] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:06.941 [2024-12-07 17:25:40.182455] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:06.941 17:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.941 17:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:06.941 17:25:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:06.941 17:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.941 17:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:06.941 17:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.941 17:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.941 17:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.200 17:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:07.200 17:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:07.200 17:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:07.200 17:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:07.200 17:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:07.200 17:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:07.200 17:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.200 17:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.200 BaseBdev2 00:09:07.200 17:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.200 17:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:07.200 17:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:07.200 17:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:07.200 
17:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:07.200 17:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:07.200 17:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:07.200 17:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:07.200 17:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.200 17:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.200 17:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.200 17:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:07.200 17:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.200 17:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.200 [ 00:09:07.200 { 00:09:07.200 "name": "BaseBdev2", 00:09:07.200 "aliases": [ 00:09:07.200 "821be8de-cf49-4d56-bd9f-41f2a61898d8" 00:09:07.200 ], 00:09:07.200 "product_name": "Malloc disk", 00:09:07.200 "block_size": 512, 00:09:07.200 "num_blocks": 65536, 00:09:07.200 "uuid": "821be8de-cf49-4d56-bd9f-41f2a61898d8", 00:09:07.200 "assigned_rate_limits": { 00:09:07.200 "rw_ios_per_sec": 0, 00:09:07.200 "rw_mbytes_per_sec": 0, 00:09:07.200 "r_mbytes_per_sec": 0, 00:09:07.200 "w_mbytes_per_sec": 0 00:09:07.200 }, 00:09:07.200 "claimed": false, 00:09:07.200 "zoned": false, 00:09:07.200 "supported_io_types": { 00:09:07.200 "read": true, 00:09:07.200 "write": true, 00:09:07.200 "unmap": true, 00:09:07.200 "flush": true, 00:09:07.200 "reset": true, 00:09:07.200 "nvme_admin": false, 00:09:07.200 "nvme_io": false, 00:09:07.200 "nvme_io_md": false, 00:09:07.200 "write_zeroes": true, 
00:09:07.200 "zcopy": true, 00:09:07.200 "get_zone_info": false, 00:09:07.200 "zone_management": false, 00:09:07.200 "zone_append": false, 00:09:07.200 "compare": false, 00:09:07.200 "compare_and_write": false, 00:09:07.200 "abort": true, 00:09:07.200 "seek_hole": false, 00:09:07.200 "seek_data": false, 00:09:07.200 "copy": true, 00:09:07.200 "nvme_iov_md": false 00:09:07.200 }, 00:09:07.200 "memory_domains": [ 00:09:07.200 { 00:09:07.200 "dma_device_id": "system", 00:09:07.200 "dma_device_type": 1 00:09:07.200 }, 00:09:07.200 { 00:09:07.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.200 "dma_device_type": 2 00:09:07.200 } 00:09:07.200 ], 00:09:07.200 "driver_specific": {} 00:09:07.200 } 00:09:07.200 ] 00:09:07.200 17:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.200 17:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:07.200 17:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:07.200 17:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:07.200 17:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:07.200 17:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.200 17:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.200 BaseBdev3 00:09:07.200 17:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.200 17:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:07.200 17:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:07.200 17:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:07.201 17:25:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:07.201 17:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:07.201 17:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:07.201 17:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:07.201 17:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.201 17:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.201 17:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.201 17:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:07.201 17:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.201 17:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.201 [ 00:09:07.201 { 00:09:07.201 "name": "BaseBdev3", 00:09:07.201 "aliases": [ 00:09:07.201 "7ed09f37-192e-4fff-936f-c2b02a4a25e8" 00:09:07.201 ], 00:09:07.201 "product_name": "Malloc disk", 00:09:07.201 "block_size": 512, 00:09:07.201 "num_blocks": 65536, 00:09:07.201 "uuid": "7ed09f37-192e-4fff-936f-c2b02a4a25e8", 00:09:07.201 "assigned_rate_limits": { 00:09:07.201 "rw_ios_per_sec": 0, 00:09:07.201 "rw_mbytes_per_sec": 0, 00:09:07.201 "r_mbytes_per_sec": 0, 00:09:07.201 "w_mbytes_per_sec": 0 00:09:07.201 }, 00:09:07.201 "claimed": false, 00:09:07.201 "zoned": false, 00:09:07.201 "supported_io_types": { 00:09:07.201 "read": true, 00:09:07.201 "write": true, 00:09:07.201 "unmap": true, 00:09:07.201 "flush": true, 00:09:07.201 "reset": true, 00:09:07.201 "nvme_admin": false, 00:09:07.201 "nvme_io": false, 00:09:07.201 "nvme_io_md": false, 00:09:07.201 "write_zeroes": true, 
00:09:07.201 "zcopy": true, 00:09:07.201 "get_zone_info": false, 00:09:07.201 "zone_management": false, 00:09:07.201 "zone_append": false, 00:09:07.201 "compare": false, 00:09:07.201 "compare_and_write": false, 00:09:07.201 "abort": true, 00:09:07.201 "seek_hole": false, 00:09:07.201 "seek_data": false, 00:09:07.201 "copy": true, 00:09:07.201 "nvme_iov_md": false 00:09:07.201 }, 00:09:07.201 "memory_domains": [ 00:09:07.201 { 00:09:07.201 "dma_device_id": "system", 00:09:07.201 "dma_device_type": 1 00:09:07.201 }, 00:09:07.201 { 00:09:07.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.201 "dma_device_type": 2 00:09:07.201 } 00:09:07.201 ], 00:09:07.201 "driver_specific": {} 00:09:07.201 } 00:09:07.201 ] 00:09:07.201 17:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.201 17:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:07.201 17:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:07.201 17:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:07.201 17:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:07.201 17:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.201 17:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.201 [2024-12-07 17:25:40.493894] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:07.201 [2024-12-07 17:25:40.493950] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:07.201 [2024-12-07 17:25:40.493988] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:07.201 [2024-12-07 17:25:40.495764] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:07.201 17:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.201 17:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:07.201 17:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:07.201 17:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:07.201 17:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:07.201 17:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:07.201 17:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:07.201 17:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.201 17:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.201 17:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.201 17:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.201 17:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.201 17:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.201 17:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.201 17:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.201 17:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.201 17:25:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.201 "name": "Existed_Raid", 00:09:07.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.201 "strip_size_kb": 64, 00:09:07.201 "state": "configuring", 00:09:07.201 "raid_level": "concat", 00:09:07.201 "superblock": false, 00:09:07.201 "num_base_bdevs": 3, 00:09:07.201 "num_base_bdevs_discovered": 2, 00:09:07.201 "num_base_bdevs_operational": 3, 00:09:07.201 "base_bdevs_list": [ 00:09:07.201 { 00:09:07.201 "name": "BaseBdev1", 00:09:07.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.201 "is_configured": false, 00:09:07.201 "data_offset": 0, 00:09:07.201 "data_size": 0 00:09:07.201 }, 00:09:07.201 { 00:09:07.201 "name": "BaseBdev2", 00:09:07.201 "uuid": "821be8de-cf49-4d56-bd9f-41f2a61898d8", 00:09:07.201 "is_configured": true, 00:09:07.201 "data_offset": 0, 00:09:07.201 "data_size": 65536 00:09:07.201 }, 00:09:07.201 { 00:09:07.201 "name": "BaseBdev3", 00:09:07.201 "uuid": "7ed09f37-192e-4fff-936f-c2b02a4a25e8", 00:09:07.201 "is_configured": true, 00:09:07.201 "data_offset": 0, 00:09:07.201 "data_size": 65536 00:09:07.201 } 00:09:07.201 ] 00:09:07.201 }' 00:09:07.201 17:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.201 17:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.769 17:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:07.769 17:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.769 17:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.769 [2024-12-07 17:25:40.945152] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:07.769 17:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.769 17:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:07.769 17:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:07.769 17:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:07.769 17:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:07.769 17:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:07.769 17:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:07.769 17:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.769 17:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.769 17:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.769 17:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.769 17:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.769 17:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.769 17:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.769 17:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.769 17:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.769 17:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.769 "name": "Existed_Raid", 00:09:07.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.769 "strip_size_kb": 64, 00:09:07.769 "state": "configuring", 00:09:07.769 "raid_level": "concat", 00:09:07.769 "superblock": false, 
00:09:07.769 "num_base_bdevs": 3, 00:09:07.769 "num_base_bdevs_discovered": 1, 00:09:07.769 "num_base_bdevs_operational": 3, 00:09:07.769 "base_bdevs_list": [ 00:09:07.769 { 00:09:07.769 "name": "BaseBdev1", 00:09:07.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.769 "is_configured": false, 00:09:07.769 "data_offset": 0, 00:09:07.769 "data_size": 0 00:09:07.769 }, 00:09:07.769 { 00:09:07.769 "name": null, 00:09:07.769 "uuid": "821be8de-cf49-4d56-bd9f-41f2a61898d8", 00:09:07.769 "is_configured": false, 00:09:07.769 "data_offset": 0, 00:09:07.769 "data_size": 65536 00:09:07.769 }, 00:09:07.769 { 00:09:07.769 "name": "BaseBdev3", 00:09:07.769 "uuid": "7ed09f37-192e-4fff-936f-c2b02a4a25e8", 00:09:07.769 "is_configured": true, 00:09:07.769 "data_offset": 0, 00:09:07.769 "data_size": 65536 00:09:07.769 } 00:09:07.769 ] 00:09:07.769 }' 00:09:07.769 17:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.769 17:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.338 17:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.338 17:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.338 17:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.338 17:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:08.338 17:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.338 17:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:08.338 17:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:08.338 17:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.338 
17:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.338 [2024-12-07 17:25:41.519771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:08.338 BaseBdev1 00:09:08.338 17:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.338 17:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:08.338 17:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:08.338 17:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:08.338 17:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:08.338 17:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:08.338 17:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:08.338 17:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:08.338 17:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.339 17:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.339 17:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.339 17:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:08.339 17:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.339 17:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.339 [ 00:09:08.339 { 00:09:08.339 "name": "BaseBdev1", 00:09:08.339 "aliases": [ 00:09:08.339 "9d8e66ca-4e78-4660-8879-a20be1cf6249" 00:09:08.339 ], 00:09:08.339 "product_name": 
"Malloc disk", 00:09:08.339 "block_size": 512, 00:09:08.339 "num_blocks": 65536, 00:09:08.339 "uuid": "9d8e66ca-4e78-4660-8879-a20be1cf6249", 00:09:08.339 "assigned_rate_limits": { 00:09:08.339 "rw_ios_per_sec": 0, 00:09:08.339 "rw_mbytes_per_sec": 0, 00:09:08.339 "r_mbytes_per_sec": 0, 00:09:08.339 "w_mbytes_per_sec": 0 00:09:08.339 }, 00:09:08.339 "claimed": true, 00:09:08.339 "claim_type": "exclusive_write", 00:09:08.339 "zoned": false, 00:09:08.339 "supported_io_types": { 00:09:08.339 "read": true, 00:09:08.339 "write": true, 00:09:08.339 "unmap": true, 00:09:08.339 "flush": true, 00:09:08.339 "reset": true, 00:09:08.339 "nvme_admin": false, 00:09:08.339 "nvme_io": false, 00:09:08.339 "nvme_io_md": false, 00:09:08.339 "write_zeroes": true, 00:09:08.339 "zcopy": true, 00:09:08.339 "get_zone_info": false, 00:09:08.339 "zone_management": false, 00:09:08.339 "zone_append": false, 00:09:08.339 "compare": false, 00:09:08.339 "compare_and_write": false, 00:09:08.339 "abort": true, 00:09:08.339 "seek_hole": false, 00:09:08.339 "seek_data": false, 00:09:08.339 "copy": true, 00:09:08.339 "nvme_iov_md": false 00:09:08.339 }, 00:09:08.339 "memory_domains": [ 00:09:08.339 { 00:09:08.339 "dma_device_id": "system", 00:09:08.339 "dma_device_type": 1 00:09:08.339 }, 00:09:08.339 { 00:09:08.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.339 "dma_device_type": 2 00:09:08.339 } 00:09:08.339 ], 00:09:08.339 "driver_specific": {} 00:09:08.339 } 00:09:08.339 ] 00:09:08.339 17:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.339 17:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:08.339 17:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:08.339 17:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.339 17:25:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:08.339 17:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:08.339 17:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.339 17:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:08.339 17:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.339 17:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.339 17:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.339 17:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.339 17:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.339 17:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.339 17:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.339 17:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.339 17:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.339 17:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.339 "name": "Existed_Raid", 00:09:08.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.339 "strip_size_kb": 64, 00:09:08.339 "state": "configuring", 00:09:08.339 "raid_level": "concat", 00:09:08.339 "superblock": false, 00:09:08.339 "num_base_bdevs": 3, 00:09:08.339 "num_base_bdevs_discovered": 2, 00:09:08.339 "num_base_bdevs_operational": 3, 00:09:08.339 "base_bdevs_list": [ 00:09:08.339 { 00:09:08.339 "name": "BaseBdev1", 
00:09:08.339 "uuid": "9d8e66ca-4e78-4660-8879-a20be1cf6249", 00:09:08.339 "is_configured": true, 00:09:08.339 "data_offset": 0, 00:09:08.339 "data_size": 65536 00:09:08.339 }, 00:09:08.339 { 00:09:08.339 "name": null, 00:09:08.339 "uuid": "821be8de-cf49-4d56-bd9f-41f2a61898d8", 00:09:08.339 "is_configured": false, 00:09:08.339 "data_offset": 0, 00:09:08.339 "data_size": 65536 00:09:08.339 }, 00:09:08.339 { 00:09:08.339 "name": "BaseBdev3", 00:09:08.339 "uuid": "7ed09f37-192e-4fff-936f-c2b02a4a25e8", 00:09:08.339 "is_configured": true, 00:09:08.339 "data_offset": 0, 00:09:08.339 "data_size": 65536 00:09:08.339 } 00:09:08.339 ] 00:09:08.339 }' 00:09:08.339 17:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.339 17:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.598 17:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.598 17:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:08.598 17:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.598 17:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.858 17:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.858 17:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:08.858 17:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:08.858 17:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.858 17:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.858 [2024-12-07 17:25:42.014997] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:08.858 
17:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.858 17:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:08.858 17:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.858 17:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:08.858 17:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:08.858 17:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.858 17:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:08.858 17:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.858 17:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.858 17:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.858 17:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.858 17:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.858 17:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.858 17:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.858 17:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.858 17:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.858 17:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.858 "name": "Existed_Raid", 00:09:08.858 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:08.858 "strip_size_kb": 64, 00:09:08.858 "state": "configuring", 00:09:08.858 "raid_level": "concat", 00:09:08.858 "superblock": false, 00:09:08.858 "num_base_bdevs": 3, 00:09:08.858 "num_base_bdevs_discovered": 1, 00:09:08.858 "num_base_bdevs_operational": 3, 00:09:08.858 "base_bdevs_list": [ 00:09:08.858 { 00:09:08.858 "name": "BaseBdev1", 00:09:08.858 "uuid": "9d8e66ca-4e78-4660-8879-a20be1cf6249", 00:09:08.858 "is_configured": true, 00:09:08.858 "data_offset": 0, 00:09:08.858 "data_size": 65536 00:09:08.858 }, 00:09:08.858 { 00:09:08.858 "name": null, 00:09:08.858 "uuid": "821be8de-cf49-4d56-bd9f-41f2a61898d8", 00:09:08.858 "is_configured": false, 00:09:08.858 "data_offset": 0, 00:09:08.858 "data_size": 65536 00:09:08.858 }, 00:09:08.858 { 00:09:08.858 "name": null, 00:09:08.858 "uuid": "7ed09f37-192e-4fff-936f-c2b02a4a25e8", 00:09:08.858 "is_configured": false, 00:09:08.858 "data_offset": 0, 00:09:08.858 "data_size": 65536 00:09:08.858 } 00:09:08.858 ] 00:09:08.858 }' 00:09:08.858 17:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.858 17:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.117 17:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.117 17:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:09.117 17:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.117 17:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.117 17:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.377 17:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:09.377 17:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:09.377 17:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.377 17:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.377 [2024-12-07 17:25:42.522145] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:09.377 17:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.377 17:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:09.377 17:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.377 17:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.377 17:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:09.377 17:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.377 17:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.377 17:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.377 17:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.377 17:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.377 17:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.377 17:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.377 17:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.377 17:25:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.377 17:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.377 17:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.377 17:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.377 "name": "Existed_Raid", 00:09:09.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.377 "strip_size_kb": 64, 00:09:09.377 "state": "configuring", 00:09:09.377 "raid_level": "concat", 00:09:09.377 "superblock": false, 00:09:09.377 "num_base_bdevs": 3, 00:09:09.377 "num_base_bdevs_discovered": 2, 00:09:09.377 "num_base_bdevs_operational": 3, 00:09:09.377 "base_bdevs_list": [ 00:09:09.377 { 00:09:09.377 "name": "BaseBdev1", 00:09:09.377 "uuid": "9d8e66ca-4e78-4660-8879-a20be1cf6249", 00:09:09.377 "is_configured": true, 00:09:09.377 "data_offset": 0, 00:09:09.377 "data_size": 65536 00:09:09.377 }, 00:09:09.377 { 00:09:09.377 "name": null, 00:09:09.377 "uuid": "821be8de-cf49-4d56-bd9f-41f2a61898d8", 00:09:09.377 "is_configured": false, 00:09:09.377 "data_offset": 0, 00:09:09.377 "data_size": 65536 00:09:09.377 }, 00:09:09.377 { 00:09:09.377 "name": "BaseBdev3", 00:09:09.377 "uuid": "7ed09f37-192e-4fff-936f-c2b02a4a25e8", 00:09:09.377 "is_configured": true, 00:09:09.377 "data_offset": 0, 00:09:09.377 "data_size": 65536 00:09:09.377 } 00:09:09.377 ] 00:09:09.377 }' 00:09:09.377 17:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.377 17:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.637 17:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.637 17:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.637 17:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:09:09.637 17:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.637 17:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.637 17:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:09.637 17:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:09.637 17:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.637 17:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.637 [2024-12-07 17:25:42.989343] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:09.896 17:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.896 17:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:09.897 17:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.897 17:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.897 17:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:09.897 17:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.897 17:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.897 17:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.897 17:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.897 17:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.897 17:25:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.897 17:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.897 17:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.897 17:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.897 17:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.897 17:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.897 17:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.897 "name": "Existed_Raid", 00:09:09.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.897 "strip_size_kb": 64, 00:09:09.897 "state": "configuring", 00:09:09.897 "raid_level": "concat", 00:09:09.897 "superblock": false, 00:09:09.897 "num_base_bdevs": 3, 00:09:09.897 "num_base_bdevs_discovered": 1, 00:09:09.897 "num_base_bdevs_operational": 3, 00:09:09.897 "base_bdevs_list": [ 00:09:09.897 { 00:09:09.897 "name": null, 00:09:09.897 "uuid": "9d8e66ca-4e78-4660-8879-a20be1cf6249", 00:09:09.897 "is_configured": false, 00:09:09.897 "data_offset": 0, 00:09:09.897 "data_size": 65536 00:09:09.897 }, 00:09:09.897 { 00:09:09.897 "name": null, 00:09:09.897 "uuid": "821be8de-cf49-4d56-bd9f-41f2a61898d8", 00:09:09.897 "is_configured": false, 00:09:09.897 "data_offset": 0, 00:09:09.897 "data_size": 65536 00:09:09.897 }, 00:09:09.897 { 00:09:09.897 "name": "BaseBdev3", 00:09:09.897 "uuid": "7ed09f37-192e-4fff-936f-c2b02a4a25e8", 00:09:09.897 "is_configured": true, 00:09:09.897 "data_offset": 0, 00:09:09.897 "data_size": 65536 00:09:09.897 } 00:09:09.897 ] 00:09:09.897 }' 00:09:09.897 17:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.897 17:25:43 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.466 17:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:10.466 17:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.466 17:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.466 17:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.466 17:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.466 17:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:10.466 17:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:10.466 17:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.466 17:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.466 [2024-12-07 17:25:43.598049] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:10.466 17:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.466 17:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:10.466 17:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.466 17:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.466 17:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:10.466 17:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.466 17:25:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.466 17:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.466 17:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.466 17:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.466 17:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.466 17:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.466 17:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.466 17:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.466 17:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.466 17:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.466 17:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.466 "name": "Existed_Raid", 00:09:10.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.466 "strip_size_kb": 64, 00:09:10.466 "state": "configuring", 00:09:10.466 "raid_level": "concat", 00:09:10.466 "superblock": false, 00:09:10.466 "num_base_bdevs": 3, 00:09:10.466 "num_base_bdevs_discovered": 2, 00:09:10.466 "num_base_bdevs_operational": 3, 00:09:10.466 "base_bdevs_list": [ 00:09:10.466 { 00:09:10.466 "name": null, 00:09:10.466 "uuid": "9d8e66ca-4e78-4660-8879-a20be1cf6249", 00:09:10.466 "is_configured": false, 00:09:10.466 "data_offset": 0, 00:09:10.467 "data_size": 65536 00:09:10.467 }, 00:09:10.467 { 00:09:10.467 "name": "BaseBdev2", 00:09:10.467 "uuid": "821be8de-cf49-4d56-bd9f-41f2a61898d8", 00:09:10.467 "is_configured": true, 00:09:10.467 "data_offset": 
0, 00:09:10.467 "data_size": 65536 00:09:10.467 }, 00:09:10.467 { 00:09:10.467 "name": "BaseBdev3", 00:09:10.467 "uuid": "7ed09f37-192e-4fff-936f-c2b02a4a25e8", 00:09:10.467 "is_configured": true, 00:09:10.467 "data_offset": 0, 00:09:10.467 "data_size": 65536 00:09:10.467 } 00:09:10.467 ] 00:09:10.467 }' 00:09:10.467 17:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.467 17:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.726 17:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.726 17:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:10.726 17:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.726 17:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.726 17:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.726 17:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:10.726 17:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:10.727 17:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.727 17:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.727 17:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.727 17:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.727 17:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9d8e66ca-4e78-4660-8879-a20be1cf6249 00:09:10.727 17:25:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.727 17:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.987 [2024-12-07 17:25:44.133354] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:10.987 [2024-12-07 17:25:44.133396] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:10.987 [2024-12-07 17:25:44.133405] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:10.987 [2024-12-07 17:25:44.133634] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:10.987 [2024-12-07 17:25:44.133791] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:10.987 [2024-12-07 17:25:44.133801] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:10.987 [2024-12-07 17:25:44.134097] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:10.987 NewBaseBdev 00:09:10.987 17:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.987 17:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:10.987 17:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:10.987 17:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:10.987 17:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:10.987 17:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:10.987 17:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:10.987 17:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:10.987 
17:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.987 17:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.987 17:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.987 17:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:10.988 17:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.988 17:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.988 [ 00:09:10.988 { 00:09:10.988 "name": "NewBaseBdev", 00:09:10.988 "aliases": [ 00:09:10.988 "9d8e66ca-4e78-4660-8879-a20be1cf6249" 00:09:10.988 ], 00:09:10.988 "product_name": "Malloc disk", 00:09:10.988 "block_size": 512, 00:09:10.988 "num_blocks": 65536, 00:09:10.988 "uuid": "9d8e66ca-4e78-4660-8879-a20be1cf6249", 00:09:10.988 "assigned_rate_limits": { 00:09:10.988 "rw_ios_per_sec": 0, 00:09:10.988 "rw_mbytes_per_sec": 0, 00:09:10.988 "r_mbytes_per_sec": 0, 00:09:10.988 "w_mbytes_per_sec": 0 00:09:10.988 }, 00:09:10.988 "claimed": true, 00:09:10.988 "claim_type": "exclusive_write", 00:09:10.988 "zoned": false, 00:09:10.988 "supported_io_types": { 00:09:10.988 "read": true, 00:09:10.988 "write": true, 00:09:10.988 "unmap": true, 00:09:10.988 "flush": true, 00:09:10.988 "reset": true, 00:09:10.988 "nvme_admin": false, 00:09:10.988 "nvme_io": false, 00:09:10.988 "nvme_io_md": false, 00:09:10.988 "write_zeroes": true, 00:09:10.988 "zcopy": true, 00:09:10.988 "get_zone_info": false, 00:09:10.988 "zone_management": false, 00:09:10.988 "zone_append": false, 00:09:10.988 "compare": false, 00:09:10.988 "compare_and_write": false, 00:09:10.988 "abort": true, 00:09:10.988 "seek_hole": false, 00:09:10.988 "seek_data": false, 00:09:10.988 "copy": true, 00:09:10.988 "nvme_iov_md": false 00:09:10.988 }, 00:09:10.988 
"memory_domains": [ 00:09:10.988 { 00:09:10.988 "dma_device_id": "system", 00:09:10.988 "dma_device_type": 1 00:09:10.988 }, 00:09:10.988 { 00:09:10.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.988 "dma_device_type": 2 00:09:10.988 } 00:09:10.988 ], 00:09:10.988 "driver_specific": {} 00:09:10.988 } 00:09:10.988 ] 00:09:10.988 17:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.988 17:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:10.988 17:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:10.988 17:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.988 17:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:10.988 17:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:10.988 17:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.988 17:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.988 17:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.988 17:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.988 17:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.988 17:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.988 17:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.988 17:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.988 17:25:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.988 17:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.988 17:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.988 17:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.988 "name": "Existed_Raid", 00:09:10.988 "uuid": "a8a3a6bf-9409-41ca-bf98-f3b7a371a717", 00:09:10.988 "strip_size_kb": 64, 00:09:10.988 "state": "online", 00:09:10.988 "raid_level": "concat", 00:09:10.988 "superblock": false, 00:09:10.988 "num_base_bdevs": 3, 00:09:10.988 "num_base_bdevs_discovered": 3, 00:09:10.988 "num_base_bdevs_operational": 3, 00:09:10.988 "base_bdevs_list": [ 00:09:10.988 { 00:09:10.988 "name": "NewBaseBdev", 00:09:10.988 "uuid": "9d8e66ca-4e78-4660-8879-a20be1cf6249", 00:09:10.988 "is_configured": true, 00:09:10.988 "data_offset": 0, 00:09:10.988 "data_size": 65536 00:09:10.988 }, 00:09:10.988 { 00:09:10.988 "name": "BaseBdev2", 00:09:10.988 "uuid": "821be8de-cf49-4d56-bd9f-41f2a61898d8", 00:09:10.988 "is_configured": true, 00:09:10.988 "data_offset": 0, 00:09:10.988 "data_size": 65536 00:09:10.988 }, 00:09:10.988 { 00:09:10.988 "name": "BaseBdev3", 00:09:10.988 "uuid": "7ed09f37-192e-4fff-936f-c2b02a4a25e8", 00:09:10.988 "is_configured": true, 00:09:10.988 "data_offset": 0, 00:09:10.988 "data_size": 65536 00:09:10.988 } 00:09:10.988 ] 00:09:10.988 }' 00:09:10.988 17:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.988 17:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.248 17:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:11.248 17:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:11.248 17:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:09:11.248 17:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:11.248 17:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:11.248 17:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:11.248 17:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:11.248 17:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:11.248 17:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.248 17:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.248 [2024-12-07 17:25:44.620855] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:11.508 17:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.508 17:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:11.508 "name": "Existed_Raid", 00:09:11.508 "aliases": [ 00:09:11.508 "a8a3a6bf-9409-41ca-bf98-f3b7a371a717" 00:09:11.508 ], 00:09:11.508 "product_name": "Raid Volume", 00:09:11.508 "block_size": 512, 00:09:11.508 "num_blocks": 196608, 00:09:11.508 "uuid": "a8a3a6bf-9409-41ca-bf98-f3b7a371a717", 00:09:11.508 "assigned_rate_limits": { 00:09:11.508 "rw_ios_per_sec": 0, 00:09:11.508 "rw_mbytes_per_sec": 0, 00:09:11.508 "r_mbytes_per_sec": 0, 00:09:11.508 "w_mbytes_per_sec": 0 00:09:11.508 }, 00:09:11.508 "claimed": false, 00:09:11.508 "zoned": false, 00:09:11.508 "supported_io_types": { 00:09:11.508 "read": true, 00:09:11.508 "write": true, 00:09:11.508 "unmap": true, 00:09:11.508 "flush": true, 00:09:11.508 "reset": true, 00:09:11.508 "nvme_admin": false, 00:09:11.509 "nvme_io": false, 00:09:11.509 "nvme_io_md": false, 00:09:11.509 "write_zeroes": true, 
00:09:11.509 "zcopy": false, 00:09:11.509 "get_zone_info": false, 00:09:11.509 "zone_management": false, 00:09:11.509 "zone_append": false, 00:09:11.509 "compare": false, 00:09:11.509 "compare_and_write": false, 00:09:11.509 "abort": false, 00:09:11.509 "seek_hole": false, 00:09:11.509 "seek_data": false, 00:09:11.509 "copy": false, 00:09:11.509 "nvme_iov_md": false 00:09:11.509 }, 00:09:11.509 "memory_domains": [ 00:09:11.509 { 00:09:11.509 "dma_device_id": "system", 00:09:11.509 "dma_device_type": 1 00:09:11.509 }, 00:09:11.509 { 00:09:11.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.509 "dma_device_type": 2 00:09:11.509 }, 00:09:11.509 { 00:09:11.509 "dma_device_id": "system", 00:09:11.509 "dma_device_type": 1 00:09:11.509 }, 00:09:11.509 { 00:09:11.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.509 "dma_device_type": 2 00:09:11.509 }, 00:09:11.509 { 00:09:11.509 "dma_device_id": "system", 00:09:11.509 "dma_device_type": 1 00:09:11.509 }, 00:09:11.509 { 00:09:11.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.509 "dma_device_type": 2 00:09:11.509 } 00:09:11.509 ], 00:09:11.509 "driver_specific": { 00:09:11.509 "raid": { 00:09:11.509 "uuid": "a8a3a6bf-9409-41ca-bf98-f3b7a371a717", 00:09:11.509 "strip_size_kb": 64, 00:09:11.509 "state": "online", 00:09:11.509 "raid_level": "concat", 00:09:11.509 "superblock": false, 00:09:11.509 "num_base_bdevs": 3, 00:09:11.509 "num_base_bdevs_discovered": 3, 00:09:11.509 "num_base_bdevs_operational": 3, 00:09:11.509 "base_bdevs_list": [ 00:09:11.509 { 00:09:11.509 "name": "NewBaseBdev", 00:09:11.509 "uuid": "9d8e66ca-4e78-4660-8879-a20be1cf6249", 00:09:11.509 "is_configured": true, 00:09:11.509 "data_offset": 0, 00:09:11.509 "data_size": 65536 00:09:11.509 }, 00:09:11.509 { 00:09:11.509 "name": "BaseBdev2", 00:09:11.509 "uuid": "821be8de-cf49-4d56-bd9f-41f2a61898d8", 00:09:11.509 "is_configured": true, 00:09:11.509 "data_offset": 0, 00:09:11.509 "data_size": 65536 00:09:11.509 }, 00:09:11.509 { 
00:09:11.509 "name": "BaseBdev3", 00:09:11.509 "uuid": "7ed09f37-192e-4fff-936f-c2b02a4a25e8", 00:09:11.509 "is_configured": true, 00:09:11.509 "data_offset": 0, 00:09:11.509 "data_size": 65536 00:09:11.509 } 00:09:11.509 ] 00:09:11.509 } 00:09:11.509 } 00:09:11.509 }' 00:09:11.509 17:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:11.509 17:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:11.509 BaseBdev2 00:09:11.509 BaseBdev3' 00:09:11.509 17:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.509 17:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:11.509 17:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.509 17:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.509 17:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:11.509 17:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.509 17:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.509 17:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.509 17:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.509 17:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.509 17:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.509 17:25:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.509 17:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:11.509 17:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.509 17:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.509 17:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.509 17:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.509 17:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.509 17:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.509 17:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:11.509 17:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.509 17:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.509 17:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.509 17:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.509 17:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.509 17:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.509 17:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:11.509 17:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.509 17:25:44 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:11.509 [2024-12-07 17:25:44.884089] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:11.509 [2024-12-07 17:25:44.884118] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:11.509 [2024-12-07 17:25:44.884198] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:11.509 [2024-12-07 17:25:44.884266] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:11.509 [2024-12-07 17:25:44.884278] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:11.770 17:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.770 17:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65629 00:09:11.770 17:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 65629 ']' 00:09:11.770 17:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65629 00:09:11.770 17:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:11.770 17:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:11.770 17:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65629 00:09:11.770 killing process with pid 65629 00:09:11.770 17:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:11.770 17:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:11.770 17:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65629' 00:09:11.770 17:25:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@973 -- # kill 65629 00:09:11.770 [2024-12-07 17:25:44.930583] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:11.770 17:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65629 00:09:12.028 [2024-12-07 17:25:45.218436] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:12.966 ************************************ 00:09:12.966 END TEST raid_state_function_test 00:09:12.966 ************************************ 00:09:12.966 17:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:12.966 00:09:12.966 real 0m10.415s 00:09:12.966 user 0m16.587s 00:09:12.966 sys 0m1.812s 00:09:12.966 17:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:12.966 17:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.227 17:25:46 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:09:13.227 17:25:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:13.227 17:25:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:13.227 17:25:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:13.227 ************************************ 00:09:13.227 START TEST raid_state_function_test_sb 00:09:13.227 ************************************ 00:09:13.227 17:25:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:09:13.227 17:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:13.227 17:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:13.227 17:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:13.227 17:25:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:13.227 17:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:13.227 17:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:13.227 17:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:13.227 17:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:13.227 17:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:13.227 17:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:13.227 17:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:13.227 17:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:13.227 17:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:13.227 17:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:13.227 17:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:13.227 17:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:13.227 17:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:13.227 17:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:13.227 17:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:13.227 17:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:13.227 17:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:13.227 17:25:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:13.227 17:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:13.227 17:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:13.227 17:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:13.227 Process raid pid: 66247 00:09:13.227 17:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:13.227 17:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66247 00:09:13.227 17:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:13.227 17:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66247' 00:09:13.227 17:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66247 00:09:13.227 17:25:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66247 ']' 00:09:13.227 17:25:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.227 17:25:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:13.227 17:25:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:13.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:13.227 17:25:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:13.227 17:25:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.227 [2024-12-07 17:25:46.466740] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:09:13.227 [2024-12-07 17:25:46.466916] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:13.589 [2024-12-07 17:25:46.640622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.589 [2024-12-07 17:25:46.752845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.863 [2024-12-07 17:25:46.951444] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:13.863 [2024-12-07 17:25:46.951582] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:14.123 17:25:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:14.123 17:25:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:14.123 17:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:14.123 17:25:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.123 17:25:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.123 [2024-12-07 17:25:47.297497] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:14.123 [2024-12-07 17:25:47.297613] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:14.123 [2024-12-07 
17:25:47.297654] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:14.123 [2024-12-07 17:25:47.297678] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:14.123 [2024-12-07 17:25:47.297697] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:14.123 [2024-12-07 17:25:47.297717] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:14.123 17:25:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.123 17:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:14.123 17:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.123 17:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.123 17:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:14.123 17:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.123 17:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.123 17:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.123 17:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.123 17:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.123 17:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.123 17:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.123 17:25:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.123 17:25:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.123 17:25:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.123 17:25:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.124 17:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.124 "name": "Existed_Raid", 00:09:14.124 "uuid": "107dc8bf-3cbd-4918-8ad1-0c2d261d1c53", 00:09:14.124 "strip_size_kb": 64, 00:09:14.124 "state": "configuring", 00:09:14.124 "raid_level": "concat", 00:09:14.124 "superblock": true, 00:09:14.124 "num_base_bdevs": 3, 00:09:14.124 "num_base_bdevs_discovered": 0, 00:09:14.124 "num_base_bdevs_operational": 3, 00:09:14.124 "base_bdevs_list": [ 00:09:14.124 { 00:09:14.124 "name": "BaseBdev1", 00:09:14.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.124 "is_configured": false, 00:09:14.124 "data_offset": 0, 00:09:14.124 "data_size": 0 00:09:14.124 }, 00:09:14.124 { 00:09:14.124 "name": "BaseBdev2", 00:09:14.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.124 "is_configured": false, 00:09:14.124 "data_offset": 0, 00:09:14.124 "data_size": 0 00:09:14.124 }, 00:09:14.124 { 00:09:14.124 "name": "BaseBdev3", 00:09:14.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.124 "is_configured": false, 00:09:14.124 "data_offset": 0, 00:09:14.124 "data_size": 0 00:09:14.124 } 00:09:14.124 ] 00:09:14.124 }' 00:09:14.124 17:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.124 17:25:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.384 17:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:14.384 17:25:47 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.384 17:25:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.384 [2024-12-07 17:25:47.716711] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:14.384 [2024-12-07 17:25:47.716749] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:14.384 17:25:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.384 17:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:14.384 17:25:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.384 17:25:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.384 [2024-12-07 17:25:47.728695] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:14.384 [2024-12-07 17:25:47.728780] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:14.384 [2024-12-07 17:25:47.728793] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:14.384 [2024-12-07 17:25:47.728802] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:14.384 [2024-12-07 17:25:47.728808] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:14.384 [2024-12-07 17:25:47.728817] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:14.384 17:25:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.384 17:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:14.384 
17:25:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.384 17:25:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.645 [2024-12-07 17:25:47.776155] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:14.645 BaseBdev1 00:09:14.645 17:25:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.645 17:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:14.645 17:25:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:14.645 17:25:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:14.645 17:25:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:14.645 17:25:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:14.645 17:25:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:14.645 17:25:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:14.645 17:25:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.645 17:25:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.645 17:25:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.645 17:25:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:14.645 17:25:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.645 17:25:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.645 [ 00:09:14.645 { 
00:09:14.645 "name": "BaseBdev1", 00:09:14.645 "aliases": [ 00:09:14.645 "b9ea9ba8-508c-4ff8-bf8e-4ba89bf5143f" 00:09:14.645 ], 00:09:14.645 "product_name": "Malloc disk", 00:09:14.645 "block_size": 512, 00:09:14.645 "num_blocks": 65536, 00:09:14.645 "uuid": "b9ea9ba8-508c-4ff8-bf8e-4ba89bf5143f", 00:09:14.645 "assigned_rate_limits": { 00:09:14.645 "rw_ios_per_sec": 0, 00:09:14.645 "rw_mbytes_per_sec": 0, 00:09:14.645 "r_mbytes_per_sec": 0, 00:09:14.645 "w_mbytes_per_sec": 0 00:09:14.645 }, 00:09:14.645 "claimed": true, 00:09:14.645 "claim_type": "exclusive_write", 00:09:14.645 "zoned": false, 00:09:14.645 "supported_io_types": { 00:09:14.645 "read": true, 00:09:14.645 "write": true, 00:09:14.645 "unmap": true, 00:09:14.645 "flush": true, 00:09:14.645 "reset": true, 00:09:14.645 "nvme_admin": false, 00:09:14.645 "nvme_io": false, 00:09:14.645 "nvme_io_md": false, 00:09:14.645 "write_zeroes": true, 00:09:14.645 "zcopy": true, 00:09:14.645 "get_zone_info": false, 00:09:14.645 "zone_management": false, 00:09:14.645 "zone_append": false, 00:09:14.645 "compare": false, 00:09:14.645 "compare_and_write": false, 00:09:14.645 "abort": true, 00:09:14.645 "seek_hole": false, 00:09:14.645 "seek_data": false, 00:09:14.645 "copy": true, 00:09:14.645 "nvme_iov_md": false 00:09:14.645 }, 00:09:14.645 "memory_domains": [ 00:09:14.645 { 00:09:14.645 "dma_device_id": "system", 00:09:14.645 "dma_device_type": 1 00:09:14.645 }, 00:09:14.645 { 00:09:14.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.645 "dma_device_type": 2 00:09:14.645 } 00:09:14.645 ], 00:09:14.645 "driver_specific": {} 00:09:14.645 } 00:09:14.645 ] 00:09:14.645 17:25:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.645 17:25:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:14.645 17:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:09:14.646 17:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.646 17:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.646 17:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:14.646 17:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.646 17:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.646 17:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.646 17:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.646 17:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.646 17:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.646 17:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.646 17:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.646 17:25:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.646 17:25:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.646 17:25:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.646 17:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.646 "name": "Existed_Raid", 00:09:14.646 "uuid": "aea4d174-e00c-46ff-985d-0f3b9bad496e", 00:09:14.646 "strip_size_kb": 64, 00:09:14.646 "state": "configuring", 00:09:14.646 "raid_level": "concat", 00:09:14.646 "superblock": true, 00:09:14.646 
"num_base_bdevs": 3, 00:09:14.646 "num_base_bdevs_discovered": 1, 00:09:14.646 "num_base_bdevs_operational": 3, 00:09:14.646 "base_bdevs_list": [ 00:09:14.646 { 00:09:14.646 "name": "BaseBdev1", 00:09:14.646 "uuid": "b9ea9ba8-508c-4ff8-bf8e-4ba89bf5143f", 00:09:14.646 "is_configured": true, 00:09:14.646 "data_offset": 2048, 00:09:14.646 "data_size": 63488 00:09:14.646 }, 00:09:14.646 { 00:09:14.646 "name": "BaseBdev2", 00:09:14.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.646 "is_configured": false, 00:09:14.646 "data_offset": 0, 00:09:14.646 "data_size": 0 00:09:14.646 }, 00:09:14.646 { 00:09:14.646 "name": "BaseBdev3", 00:09:14.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.646 "is_configured": false, 00:09:14.646 "data_offset": 0, 00:09:14.646 "data_size": 0 00:09:14.646 } 00:09:14.646 ] 00:09:14.646 }' 00:09:14.646 17:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.646 17:25:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.907 17:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:14.907 17:25:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.907 17:25:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.907 [2024-12-07 17:25:48.255349] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:14.907 [2024-12-07 17:25:48.255443] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:14.907 17:25:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.907 17:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:14.907 
17:25:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.907 17:25:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.907 [2024-12-07 17:25:48.263398] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:14.907 [2024-12-07 17:25:48.265198] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:14.907 [2024-12-07 17:25:48.265241] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:14.907 [2024-12-07 17:25:48.265251] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:14.907 [2024-12-07 17:25:48.265276] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:14.907 17:25:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.907 17:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:14.907 17:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:14.907 17:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:14.907 17:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.907 17:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.907 17:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:14.907 17:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.907 17:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.907 17:25:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.907 17:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.907 17:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.907 17:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.907 17:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.907 17:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.907 17:25:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.907 17:25:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.167 17:25:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.167 17:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.167 "name": "Existed_Raid", 00:09:15.167 "uuid": "283a1c64-1ee2-4e28-b474-f4bef74c0a55", 00:09:15.167 "strip_size_kb": 64, 00:09:15.167 "state": "configuring", 00:09:15.167 "raid_level": "concat", 00:09:15.167 "superblock": true, 00:09:15.167 "num_base_bdevs": 3, 00:09:15.167 "num_base_bdevs_discovered": 1, 00:09:15.167 "num_base_bdevs_operational": 3, 00:09:15.167 "base_bdevs_list": [ 00:09:15.167 { 00:09:15.167 "name": "BaseBdev1", 00:09:15.167 "uuid": "b9ea9ba8-508c-4ff8-bf8e-4ba89bf5143f", 00:09:15.167 "is_configured": true, 00:09:15.167 "data_offset": 2048, 00:09:15.167 "data_size": 63488 00:09:15.167 }, 00:09:15.167 { 00:09:15.167 "name": "BaseBdev2", 00:09:15.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.167 "is_configured": false, 00:09:15.167 "data_offset": 0, 00:09:15.167 "data_size": 0 00:09:15.167 }, 00:09:15.167 { 00:09:15.167 "name": "BaseBdev3", 00:09:15.167 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:15.167 "is_configured": false, 00:09:15.167 "data_offset": 0, 00:09:15.167 "data_size": 0 00:09:15.167 } 00:09:15.167 ] 00:09:15.167 }' 00:09:15.167 17:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.167 17:25:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.428 17:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:15.428 17:25:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.428 17:25:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.428 [2024-12-07 17:25:48.730733] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:15.428 BaseBdev2 00:09:15.428 17:25:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.428 17:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:15.428 17:25:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:15.428 17:25:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:15.428 17:25:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:15.428 17:25:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:15.428 17:25:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:15.428 17:25:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:15.428 17:25:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.428 17:25:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:15.428 17:25:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.428 17:25:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:15.428 17:25:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.428 17:25:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.428 [ 00:09:15.428 { 00:09:15.428 "name": "BaseBdev2", 00:09:15.428 "aliases": [ 00:09:15.428 "892fdc09-b84b-4d92-bf00-186733686fc1" 00:09:15.428 ], 00:09:15.428 "product_name": "Malloc disk", 00:09:15.428 "block_size": 512, 00:09:15.428 "num_blocks": 65536, 00:09:15.428 "uuid": "892fdc09-b84b-4d92-bf00-186733686fc1", 00:09:15.428 "assigned_rate_limits": { 00:09:15.428 "rw_ios_per_sec": 0, 00:09:15.428 "rw_mbytes_per_sec": 0, 00:09:15.428 "r_mbytes_per_sec": 0, 00:09:15.428 "w_mbytes_per_sec": 0 00:09:15.428 }, 00:09:15.428 "claimed": true, 00:09:15.428 "claim_type": "exclusive_write", 00:09:15.428 "zoned": false, 00:09:15.428 "supported_io_types": { 00:09:15.428 "read": true, 00:09:15.428 "write": true, 00:09:15.428 "unmap": true, 00:09:15.428 "flush": true, 00:09:15.428 "reset": true, 00:09:15.428 "nvme_admin": false, 00:09:15.428 "nvme_io": false, 00:09:15.428 "nvme_io_md": false, 00:09:15.428 "write_zeroes": true, 00:09:15.428 "zcopy": true, 00:09:15.428 "get_zone_info": false, 00:09:15.428 "zone_management": false, 00:09:15.428 "zone_append": false, 00:09:15.428 "compare": false, 00:09:15.428 "compare_and_write": false, 00:09:15.428 "abort": true, 00:09:15.428 "seek_hole": false, 00:09:15.428 "seek_data": false, 00:09:15.428 "copy": true, 00:09:15.428 "nvme_iov_md": false 00:09:15.428 }, 00:09:15.428 "memory_domains": [ 00:09:15.428 { 00:09:15.428 "dma_device_id": "system", 00:09:15.428 "dma_device_type": 1 00:09:15.428 }, 00:09:15.428 { 00:09:15.428 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.428 "dma_device_type": 2 00:09:15.428 } 00:09:15.428 ], 00:09:15.428 "driver_specific": {} 00:09:15.428 } 00:09:15.428 ] 00:09:15.428 17:25:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.428 17:25:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:15.428 17:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:15.428 17:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:15.428 17:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:15.428 17:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.428 17:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:15.428 17:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:15.428 17:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.428 17:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.428 17:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.428 17:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.428 17:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.428 17:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.428 17:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.428 17:25:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.428 17:25:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.428 17:25:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.428 17:25:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.688 17:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.688 "name": "Existed_Raid", 00:09:15.688 "uuid": "283a1c64-1ee2-4e28-b474-f4bef74c0a55", 00:09:15.688 "strip_size_kb": 64, 00:09:15.688 "state": "configuring", 00:09:15.688 "raid_level": "concat", 00:09:15.688 "superblock": true, 00:09:15.688 "num_base_bdevs": 3, 00:09:15.688 "num_base_bdevs_discovered": 2, 00:09:15.688 "num_base_bdevs_operational": 3, 00:09:15.688 "base_bdevs_list": [ 00:09:15.688 { 00:09:15.688 "name": "BaseBdev1", 00:09:15.688 "uuid": "b9ea9ba8-508c-4ff8-bf8e-4ba89bf5143f", 00:09:15.688 "is_configured": true, 00:09:15.688 "data_offset": 2048, 00:09:15.688 "data_size": 63488 00:09:15.688 }, 00:09:15.688 { 00:09:15.688 "name": "BaseBdev2", 00:09:15.688 "uuid": "892fdc09-b84b-4d92-bf00-186733686fc1", 00:09:15.688 "is_configured": true, 00:09:15.688 "data_offset": 2048, 00:09:15.688 "data_size": 63488 00:09:15.688 }, 00:09:15.688 { 00:09:15.688 "name": "BaseBdev3", 00:09:15.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.688 "is_configured": false, 00:09:15.688 "data_offset": 0, 00:09:15.688 "data_size": 0 00:09:15.688 } 00:09:15.688 ] 00:09:15.688 }' 00:09:15.688 17:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.688 17:25:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.949 17:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:15.949 17:25:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.949 17:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.949 [2024-12-07 17:25:49.254133] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:15.949 [2024-12-07 17:25:49.254441] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:15.949 [2024-12-07 17:25:49.254465] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:15.949 [2024-12-07 17:25:49.254725] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:15.949 [2024-12-07 17:25:49.254875] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:15.949 [2024-12-07 17:25:49.254884] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:15.949 [2024-12-07 17:25:49.255048] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:15.949 BaseBdev3 00:09:15.949 17:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.949 17:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:15.949 17:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:15.949 17:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:15.949 17:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:15.949 17:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:15.949 17:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:15.949 17:25:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:15.949 17:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.949 17:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.949 17:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.949 17:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:15.949 17:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.949 17:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.949 [ 00:09:15.949 { 00:09:15.949 "name": "BaseBdev3", 00:09:15.949 "aliases": [ 00:09:15.949 "e847d83e-b0da-4142-ba9e-7f989093f333" 00:09:15.949 ], 00:09:15.949 "product_name": "Malloc disk", 00:09:15.949 "block_size": 512, 00:09:15.949 "num_blocks": 65536, 00:09:15.949 "uuid": "e847d83e-b0da-4142-ba9e-7f989093f333", 00:09:15.949 "assigned_rate_limits": { 00:09:15.949 "rw_ios_per_sec": 0, 00:09:15.949 "rw_mbytes_per_sec": 0, 00:09:15.949 "r_mbytes_per_sec": 0, 00:09:15.949 "w_mbytes_per_sec": 0 00:09:15.949 }, 00:09:15.949 "claimed": true, 00:09:15.949 "claim_type": "exclusive_write", 00:09:15.949 "zoned": false, 00:09:15.949 "supported_io_types": { 00:09:15.949 "read": true, 00:09:15.949 "write": true, 00:09:15.949 "unmap": true, 00:09:15.949 "flush": true, 00:09:15.949 "reset": true, 00:09:15.949 "nvme_admin": false, 00:09:15.949 "nvme_io": false, 00:09:15.949 "nvme_io_md": false, 00:09:15.949 "write_zeroes": true, 00:09:15.949 "zcopy": true, 00:09:15.949 "get_zone_info": false, 00:09:15.949 "zone_management": false, 00:09:15.949 "zone_append": false, 00:09:15.949 "compare": false, 00:09:15.949 "compare_and_write": false, 00:09:15.949 "abort": true, 00:09:15.949 "seek_hole": false, 00:09:15.949 "seek_data": false, 
00:09:15.949 "copy": true, 00:09:15.949 "nvme_iov_md": false 00:09:15.949 }, 00:09:15.949 "memory_domains": [ 00:09:15.949 { 00:09:15.949 "dma_device_id": "system", 00:09:15.949 "dma_device_type": 1 00:09:15.949 }, 00:09:15.949 { 00:09:15.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.949 "dma_device_type": 2 00:09:15.949 } 00:09:15.949 ], 00:09:15.949 "driver_specific": {} 00:09:15.949 } 00:09:15.949 ] 00:09:15.949 17:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.949 17:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:15.949 17:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:15.949 17:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:15.949 17:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:15.950 17:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.950 17:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:15.950 17:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:15.950 17:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.950 17:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.950 17:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.950 17:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.950 17:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.950 17:25:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.950 17:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.950 17:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.950 17:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.950 17:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.950 17:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.210 17:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.210 "name": "Existed_Raid", 00:09:16.210 "uuid": "283a1c64-1ee2-4e28-b474-f4bef74c0a55", 00:09:16.210 "strip_size_kb": 64, 00:09:16.210 "state": "online", 00:09:16.210 "raid_level": "concat", 00:09:16.210 "superblock": true, 00:09:16.210 "num_base_bdevs": 3, 00:09:16.210 "num_base_bdevs_discovered": 3, 00:09:16.210 "num_base_bdevs_operational": 3, 00:09:16.210 "base_bdevs_list": [ 00:09:16.210 { 00:09:16.210 "name": "BaseBdev1", 00:09:16.210 "uuid": "b9ea9ba8-508c-4ff8-bf8e-4ba89bf5143f", 00:09:16.210 "is_configured": true, 00:09:16.210 "data_offset": 2048, 00:09:16.210 "data_size": 63488 00:09:16.210 }, 00:09:16.210 { 00:09:16.210 "name": "BaseBdev2", 00:09:16.210 "uuid": "892fdc09-b84b-4d92-bf00-186733686fc1", 00:09:16.210 "is_configured": true, 00:09:16.210 "data_offset": 2048, 00:09:16.210 "data_size": 63488 00:09:16.210 }, 00:09:16.210 { 00:09:16.210 "name": "BaseBdev3", 00:09:16.210 "uuid": "e847d83e-b0da-4142-ba9e-7f989093f333", 00:09:16.210 "is_configured": true, 00:09:16.210 "data_offset": 2048, 00:09:16.210 "data_size": 63488 00:09:16.210 } 00:09:16.210 ] 00:09:16.210 }' 00:09:16.210 17:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.210 17:25:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.470 17:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:16.470 17:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:16.470 17:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:16.470 17:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:16.470 17:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:16.470 17:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:16.470 17:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:16.470 17:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:16.470 17:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.470 17:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.470 [2024-12-07 17:25:49.685796] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:16.470 17:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.470 17:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:16.470 "name": "Existed_Raid", 00:09:16.470 "aliases": [ 00:09:16.470 "283a1c64-1ee2-4e28-b474-f4bef74c0a55" 00:09:16.470 ], 00:09:16.470 "product_name": "Raid Volume", 00:09:16.470 "block_size": 512, 00:09:16.470 "num_blocks": 190464, 00:09:16.470 "uuid": "283a1c64-1ee2-4e28-b474-f4bef74c0a55", 00:09:16.470 "assigned_rate_limits": { 00:09:16.470 "rw_ios_per_sec": 0, 00:09:16.470 "rw_mbytes_per_sec": 0, 00:09:16.470 
"r_mbytes_per_sec": 0, 00:09:16.470 "w_mbytes_per_sec": 0 00:09:16.470 }, 00:09:16.470 "claimed": false, 00:09:16.470 "zoned": false, 00:09:16.470 "supported_io_types": { 00:09:16.470 "read": true, 00:09:16.470 "write": true, 00:09:16.470 "unmap": true, 00:09:16.470 "flush": true, 00:09:16.470 "reset": true, 00:09:16.470 "nvme_admin": false, 00:09:16.470 "nvme_io": false, 00:09:16.470 "nvme_io_md": false, 00:09:16.470 "write_zeroes": true, 00:09:16.470 "zcopy": false, 00:09:16.470 "get_zone_info": false, 00:09:16.470 "zone_management": false, 00:09:16.470 "zone_append": false, 00:09:16.470 "compare": false, 00:09:16.470 "compare_and_write": false, 00:09:16.470 "abort": false, 00:09:16.470 "seek_hole": false, 00:09:16.470 "seek_data": false, 00:09:16.470 "copy": false, 00:09:16.470 "nvme_iov_md": false 00:09:16.470 }, 00:09:16.470 "memory_domains": [ 00:09:16.470 { 00:09:16.471 "dma_device_id": "system", 00:09:16.471 "dma_device_type": 1 00:09:16.471 }, 00:09:16.471 { 00:09:16.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.471 "dma_device_type": 2 00:09:16.471 }, 00:09:16.471 { 00:09:16.471 "dma_device_id": "system", 00:09:16.471 "dma_device_type": 1 00:09:16.471 }, 00:09:16.471 { 00:09:16.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.471 "dma_device_type": 2 00:09:16.471 }, 00:09:16.471 { 00:09:16.471 "dma_device_id": "system", 00:09:16.471 "dma_device_type": 1 00:09:16.471 }, 00:09:16.471 { 00:09:16.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.471 "dma_device_type": 2 00:09:16.471 } 00:09:16.471 ], 00:09:16.471 "driver_specific": { 00:09:16.471 "raid": { 00:09:16.471 "uuid": "283a1c64-1ee2-4e28-b474-f4bef74c0a55", 00:09:16.471 "strip_size_kb": 64, 00:09:16.471 "state": "online", 00:09:16.471 "raid_level": "concat", 00:09:16.471 "superblock": true, 00:09:16.471 "num_base_bdevs": 3, 00:09:16.471 "num_base_bdevs_discovered": 3, 00:09:16.471 "num_base_bdevs_operational": 3, 00:09:16.471 "base_bdevs_list": [ 00:09:16.471 { 00:09:16.471 
"name": "BaseBdev1", 00:09:16.471 "uuid": "b9ea9ba8-508c-4ff8-bf8e-4ba89bf5143f", 00:09:16.471 "is_configured": true, 00:09:16.471 "data_offset": 2048, 00:09:16.471 "data_size": 63488 00:09:16.471 }, 00:09:16.471 { 00:09:16.471 "name": "BaseBdev2", 00:09:16.471 "uuid": "892fdc09-b84b-4d92-bf00-186733686fc1", 00:09:16.471 "is_configured": true, 00:09:16.471 "data_offset": 2048, 00:09:16.471 "data_size": 63488 00:09:16.471 }, 00:09:16.471 { 00:09:16.471 "name": "BaseBdev3", 00:09:16.471 "uuid": "e847d83e-b0da-4142-ba9e-7f989093f333", 00:09:16.471 "is_configured": true, 00:09:16.471 "data_offset": 2048, 00:09:16.471 "data_size": 63488 00:09:16.471 } 00:09:16.471 ] 00:09:16.471 } 00:09:16.471 } 00:09:16.471 }' 00:09:16.471 17:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:16.471 17:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:16.471 BaseBdev2 00:09:16.471 BaseBdev3' 00:09:16.471 17:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.471 17:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:16.471 17:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.471 17:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.471 17:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:16.471 17:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.471 17:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.471 17:25:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.731 17:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.731 17:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.731 17:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.731 17:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:16.731 17:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.731 17:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.731 17:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.731 17:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.731 17:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.731 17:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.731 17:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.731 17:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:16.731 17:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.731 17:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.731 17:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.731 17:25:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.731 17:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.731 17:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.731 17:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:16.731 17:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.731 17:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.731 [2024-12-07 17:25:49.965066] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:16.731 [2024-12-07 17:25:49.965095] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:16.731 [2024-12-07 17:25:49.965149] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:16.731 17:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.731 17:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:16.731 17:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:16.731 17:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:16.731 17:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:16.731 17:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:16.731 17:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:16.731 17:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:16.732 17:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:09:16.732 17:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:16.732 17:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:16.732 17:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:16.732 17:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.732 17:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.732 17:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.732 17:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.732 17:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.732 17:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.732 17:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.732 17:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.732 17:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.991 17:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.991 "name": "Existed_Raid", 00:09:16.991 "uuid": "283a1c64-1ee2-4e28-b474-f4bef74c0a55", 00:09:16.991 "strip_size_kb": 64, 00:09:16.991 "state": "offline", 00:09:16.991 "raid_level": "concat", 00:09:16.991 "superblock": true, 00:09:16.991 "num_base_bdevs": 3, 00:09:16.991 "num_base_bdevs_discovered": 2, 00:09:16.991 "num_base_bdevs_operational": 2, 00:09:16.991 "base_bdevs_list": [ 00:09:16.991 { 00:09:16.991 "name": null, 00:09:16.991 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:16.991 "is_configured": false, 00:09:16.991 "data_offset": 0, 00:09:16.991 "data_size": 63488 00:09:16.991 }, 00:09:16.991 { 00:09:16.991 "name": "BaseBdev2", 00:09:16.991 "uuid": "892fdc09-b84b-4d92-bf00-186733686fc1", 00:09:16.991 "is_configured": true, 00:09:16.991 "data_offset": 2048, 00:09:16.991 "data_size": 63488 00:09:16.991 }, 00:09:16.991 { 00:09:16.991 "name": "BaseBdev3", 00:09:16.991 "uuid": "e847d83e-b0da-4142-ba9e-7f989093f333", 00:09:16.991 "is_configured": true, 00:09:16.991 "data_offset": 2048, 00:09:16.991 "data_size": 63488 00:09:16.991 } 00:09:16.991 ] 00:09:16.991 }' 00:09:16.991 17:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.991 17:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.251 17:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:17.251 17:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:17.251 17:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:17.251 17:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.251 17:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.251 17:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.251 17:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.251 17:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:17.251 17:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:17.251 17:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:09:17.251 17:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.251 17:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.251 [2024-12-07 17:25:50.497116] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:17.251 17:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.251 17:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:17.251 17:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:17.251 17:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:17.251 17:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.252 17:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.252 17:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.252 17:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.513 17:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:17.513 17:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:17.513 17:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:17.513 17:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.513 17:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.513 [2024-12-07 17:25:50.645523] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:17.513 [2024-12-07 17:25:50.645653] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:17.513 17:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.513 17:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:17.513 17:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:17.513 17:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.513 17:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.513 17:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:17.513 17:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.513 17:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.513 17:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:17.513 17:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:17.513 17:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:17.513 17:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:17.513 17:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:17.513 17:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:17.513 17:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.513 17:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.513 BaseBdev2 00:09:17.513 17:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.513 
17:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:17.513 17:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:17.513 17:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:17.513 17:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:17.513 17:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:17.513 17:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:17.514 17:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:17.514 17:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.514 17:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.514 17:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.514 17:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:17.514 17:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.514 17:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.514 [ 00:09:17.514 { 00:09:17.514 "name": "BaseBdev2", 00:09:17.514 "aliases": [ 00:09:17.514 "00b738d6-2539-4bf8-bb39-f5506bd3dc15" 00:09:17.514 ], 00:09:17.514 "product_name": "Malloc disk", 00:09:17.514 "block_size": 512, 00:09:17.514 "num_blocks": 65536, 00:09:17.514 "uuid": "00b738d6-2539-4bf8-bb39-f5506bd3dc15", 00:09:17.514 "assigned_rate_limits": { 00:09:17.514 "rw_ios_per_sec": 0, 00:09:17.514 "rw_mbytes_per_sec": 0, 00:09:17.514 "r_mbytes_per_sec": 0, 00:09:17.514 "w_mbytes_per_sec": 0 
00:09:17.514 }, 00:09:17.514 "claimed": false, 00:09:17.514 "zoned": false, 00:09:17.514 "supported_io_types": { 00:09:17.514 "read": true, 00:09:17.514 "write": true, 00:09:17.514 "unmap": true, 00:09:17.514 "flush": true, 00:09:17.514 "reset": true, 00:09:17.514 "nvme_admin": false, 00:09:17.514 "nvme_io": false, 00:09:17.514 "nvme_io_md": false, 00:09:17.514 "write_zeroes": true, 00:09:17.514 "zcopy": true, 00:09:17.514 "get_zone_info": false, 00:09:17.514 "zone_management": false, 00:09:17.514 "zone_append": false, 00:09:17.514 "compare": false, 00:09:17.514 "compare_and_write": false, 00:09:17.514 "abort": true, 00:09:17.514 "seek_hole": false, 00:09:17.514 "seek_data": false, 00:09:17.514 "copy": true, 00:09:17.514 "nvme_iov_md": false 00:09:17.514 }, 00:09:17.514 "memory_domains": [ 00:09:17.514 { 00:09:17.514 "dma_device_id": "system", 00:09:17.514 "dma_device_type": 1 00:09:17.514 }, 00:09:17.514 { 00:09:17.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.514 "dma_device_type": 2 00:09:17.514 } 00:09:17.514 ], 00:09:17.514 "driver_specific": {} 00:09:17.514 } 00:09:17.514 ] 00:09:17.514 17:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.514 17:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:17.514 17:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:17.514 17:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:17.514 17:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:17.514 17:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.514 17:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.773 BaseBdev3 00:09:17.773 17:25:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.773 17:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:17.773 17:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:17.773 17:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:17.773 17:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:17.773 17:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:17.773 17:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:17.773 17:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:17.773 17:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.773 17:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.773 17:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.773 17:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:17.773 17:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.773 17:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.773 [ 00:09:17.773 { 00:09:17.773 "name": "BaseBdev3", 00:09:17.773 "aliases": [ 00:09:17.773 "bab5afc9-51e0-4e1e-a941-c4d90cad5997" 00:09:17.773 ], 00:09:17.773 "product_name": "Malloc disk", 00:09:17.773 "block_size": 512, 00:09:17.773 "num_blocks": 65536, 00:09:17.773 "uuid": "bab5afc9-51e0-4e1e-a941-c4d90cad5997", 00:09:17.774 "assigned_rate_limits": { 00:09:17.774 "rw_ios_per_sec": 0, 00:09:17.774 "rw_mbytes_per_sec": 0, 
00:09:17.774 "r_mbytes_per_sec": 0, 00:09:17.774 "w_mbytes_per_sec": 0 00:09:17.774 }, 00:09:17.774 "claimed": false, 00:09:17.774 "zoned": false, 00:09:17.774 "supported_io_types": { 00:09:17.774 "read": true, 00:09:17.774 "write": true, 00:09:17.774 "unmap": true, 00:09:17.774 "flush": true, 00:09:17.774 "reset": true, 00:09:17.774 "nvme_admin": false, 00:09:17.774 "nvme_io": false, 00:09:17.774 "nvme_io_md": false, 00:09:17.774 "write_zeroes": true, 00:09:17.774 "zcopy": true, 00:09:17.774 "get_zone_info": false, 00:09:17.774 "zone_management": false, 00:09:17.774 "zone_append": false, 00:09:17.774 "compare": false, 00:09:17.774 "compare_and_write": false, 00:09:17.774 "abort": true, 00:09:17.774 "seek_hole": false, 00:09:17.774 "seek_data": false, 00:09:17.774 "copy": true, 00:09:17.774 "nvme_iov_md": false 00:09:17.774 }, 00:09:17.774 "memory_domains": [ 00:09:17.774 { 00:09:17.774 "dma_device_id": "system", 00:09:17.774 "dma_device_type": 1 00:09:17.774 }, 00:09:17.774 { 00:09:17.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.774 "dma_device_type": 2 00:09:17.774 } 00:09:17.774 ], 00:09:17.774 "driver_specific": {} 00:09:17.774 } 00:09:17.774 ] 00:09:17.774 17:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.774 17:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:17.774 17:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:17.774 17:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:17.774 17:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:17.774 17:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.774 17:25:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:17.774 [2024-12-07 17:25:50.953588] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:17.774 [2024-12-07 17:25:50.953678] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:17.774 [2024-12-07 17:25:50.953735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:17.774 [2024-12-07 17:25:50.955485] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:17.774 17:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.774 17:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:17.774 17:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:17.774 17:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:17.774 17:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:17.774 17:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:17.774 17:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:17.774 17:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.774 17:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.774 17:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.774 17:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.774 17:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.774 17:25:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.774 17:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:17.774 17:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.774 17:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.774 17:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.774 "name": "Existed_Raid", 00:09:17.774 "uuid": "3348ada0-b16d-41e3-8ae5-0e317af1d54d", 00:09:17.774 "strip_size_kb": 64, 00:09:17.774 "state": "configuring", 00:09:17.774 "raid_level": "concat", 00:09:17.774 "superblock": true, 00:09:17.774 "num_base_bdevs": 3, 00:09:17.774 "num_base_bdevs_discovered": 2, 00:09:17.774 "num_base_bdevs_operational": 3, 00:09:17.774 "base_bdevs_list": [ 00:09:17.774 { 00:09:17.774 "name": "BaseBdev1", 00:09:17.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.774 "is_configured": false, 00:09:17.774 "data_offset": 0, 00:09:17.774 "data_size": 0 00:09:17.774 }, 00:09:17.774 { 00:09:17.774 "name": "BaseBdev2", 00:09:17.774 "uuid": "00b738d6-2539-4bf8-bb39-f5506bd3dc15", 00:09:17.774 "is_configured": true, 00:09:17.774 "data_offset": 2048, 00:09:17.774 "data_size": 63488 00:09:17.774 }, 00:09:17.774 { 00:09:17.774 "name": "BaseBdev3", 00:09:17.774 "uuid": "bab5afc9-51e0-4e1e-a941-c4d90cad5997", 00:09:17.774 "is_configured": true, 00:09:17.774 "data_offset": 2048, 00:09:17.774 "data_size": 63488 00:09:17.774 } 00:09:17.774 ] 00:09:17.774 }' 00:09:17.774 17:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.774 17:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.034 17:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:09:18.034 17:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.034 17:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.034 [2024-12-07 17:25:51.364900] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:18.034 17:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.034 17:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:18.034 17:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:18.034 17:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:18.034 17:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:18.034 17:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.034 17:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:18.034 17:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.034 17:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.034 17:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.034 17:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.034 17:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.034 17:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:18.034 17:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:18.034 17:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.034 17:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.294 17:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.294 "name": "Existed_Raid", 00:09:18.294 "uuid": "3348ada0-b16d-41e3-8ae5-0e317af1d54d", 00:09:18.294 "strip_size_kb": 64, 00:09:18.294 "state": "configuring", 00:09:18.294 "raid_level": "concat", 00:09:18.294 "superblock": true, 00:09:18.294 "num_base_bdevs": 3, 00:09:18.294 "num_base_bdevs_discovered": 1, 00:09:18.294 "num_base_bdevs_operational": 3, 00:09:18.294 "base_bdevs_list": [ 00:09:18.294 { 00:09:18.294 "name": "BaseBdev1", 00:09:18.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.294 "is_configured": false, 00:09:18.294 "data_offset": 0, 00:09:18.294 "data_size": 0 00:09:18.294 }, 00:09:18.294 { 00:09:18.294 "name": null, 00:09:18.294 "uuid": "00b738d6-2539-4bf8-bb39-f5506bd3dc15", 00:09:18.294 "is_configured": false, 00:09:18.294 "data_offset": 0, 00:09:18.294 "data_size": 63488 00:09:18.294 }, 00:09:18.294 { 00:09:18.294 "name": "BaseBdev3", 00:09:18.294 "uuid": "bab5afc9-51e0-4e1e-a941-c4d90cad5997", 00:09:18.294 "is_configured": true, 00:09:18.294 "data_offset": 2048, 00:09:18.294 "data_size": 63488 00:09:18.294 } 00:09:18.294 ] 00:09:18.294 }' 00:09:18.294 17:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.294 17:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.554 17:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.554 17:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:18.554 17:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:18.554 17:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.554 17:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.554 17:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:18.554 17:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:18.554 17:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.554 17:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.554 [2024-12-07 17:25:51.872024] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:18.554 BaseBdev1 00:09:18.554 17:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.554 17:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:18.554 17:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:18.554 17:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:18.554 17:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:18.554 17:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:18.554 17:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:18.554 17:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:18.554 17:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.554 17:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.554 17:25:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.554 17:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:18.554 17:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.554 17:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.554 [ 00:09:18.554 { 00:09:18.554 "name": "BaseBdev1", 00:09:18.554 "aliases": [ 00:09:18.554 "80a14aec-ae8e-4b63-9ee5-188ff69f1219" 00:09:18.554 ], 00:09:18.554 "product_name": "Malloc disk", 00:09:18.554 "block_size": 512, 00:09:18.554 "num_blocks": 65536, 00:09:18.554 "uuid": "80a14aec-ae8e-4b63-9ee5-188ff69f1219", 00:09:18.554 "assigned_rate_limits": { 00:09:18.554 "rw_ios_per_sec": 0, 00:09:18.554 "rw_mbytes_per_sec": 0, 00:09:18.554 "r_mbytes_per_sec": 0, 00:09:18.554 "w_mbytes_per_sec": 0 00:09:18.554 }, 00:09:18.554 "claimed": true, 00:09:18.554 "claim_type": "exclusive_write", 00:09:18.554 "zoned": false, 00:09:18.554 "supported_io_types": { 00:09:18.554 "read": true, 00:09:18.554 "write": true, 00:09:18.554 "unmap": true, 00:09:18.554 "flush": true, 00:09:18.554 "reset": true, 00:09:18.554 "nvme_admin": false, 00:09:18.554 "nvme_io": false, 00:09:18.554 "nvme_io_md": false, 00:09:18.554 "write_zeroes": true, 00:09:18.554 "zcopy": true, 00:09:18.554 "get_zone_info": false, 00:09:18.554 "zone_management": false, 00:09:18.554 "zone_append": false, 00:09:18.554 "compare": false, 00:09:18.554 "compare_and_write": false, 00:09:18.554 "abort": true, 00:09:18.554 "seek_hole": false, 00:09:18.554 "seek_data": false, 00:09:18.554 "copy": true, 00:09:18.554 "nvme_iov_md": false 00:09:18.554 }, 00:09:18.554 "memory_domains": [ 00:09:18.554 { 00:09:18.554 "dma_device_id": "system", 00:09:18.554 "dma_device_type": 1 00:09:18.554 }, 00:09:18.554 { 00:09:18.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.554 
"dma_device_type": 2 00:09:18.554 } 00:09:18.554 ], 00:09:18.554 "driver_specific": {} 00:09:18.554 } 00:09:18.554 ] 00:09:18.554 17:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.554 17:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:18.554 17:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:18.555 17:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:18.555 17:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:18.555 17:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:18.555 17:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.555 17:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:18.555 17:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.555 17:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.555 17:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.555 17:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.555 17:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.555 17:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:18.555 17:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.555 17:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:09:18.815 17:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.815 17:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.815 "name": "Existed_Raid", 00:09:18.815 "uuid": "3348ada0-b16d-41e3-8ae5-0e317af1d54d", 00:09:18.815 "strip_size_kb": 64, 00:09:18.815 "state": "configuring", 00:09:18.815 "raid_level": "concat", 00:09:18.815 "superblock": true, 00:09:18.815 "num_base_bdevs": 3, 00:09:18.815 "num_base_bdevs_discovered": 2, 00:09:18.815 "num_base_bdevs_operational": 3, 00:09:18.815 "base_bdevs_list": [ 00:09:18.815 { 00:09:18.815 "name": "BaseBdev1", 00:09:18.815 "uuid": "80a14aec-ae8e-4b63-9ee5-188ff69f1219", 00:09:18.815 "is_configured": true, 00:09:18.815 "data_offset": 2048, 00:09:18.815 "data_size": 63488 00:09:18.815 }, 00:09:18.815 { 00:09:18.815 "name": null, 00:09:18.815 "uuid": "00b738d6-2539-4bf8-bb39-f5506bd3dc15", 00:09:18.815 "is_configured": false, 00:09:18.815 "data_offset": 0, 00:09:18.815 "data_size": 63488 00:09:18.815 }, 00:09:18.815 { 00:09:18.815 "name": "BaseBdev3", 00:09:18.815 "uuid": "bab5afc9-51e0-4e1e-a941-c4d90cad5997", 00:09:18.815 "is_configured": true, 00:09:18.815 "data_offset": 2048, 00:09:18.815 "data_size": 63488 00:09:18.815 } 00:09:18.815 ] 00:09:18.815 }' 00:09:18.815 17:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.815 17:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.074 17:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:19.074 17:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.074 17:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.074 17:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:19.074 17:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.074 17:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:19.074 17:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:19.074 17:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.074 17:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.074 [2024-12-07 17:25:52.387218] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:19.074 17:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.074 17:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:19.074 17:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.074 17:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.074 17:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:19.074 17:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.074 17:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:19.074 17:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.074 17:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.075 17:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.075 17:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.075 
17:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.075 17:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.075 17:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.075 17:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.075 17:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.075 17:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.075 "name": "Existed_Raid", 00:09:19.075 "uuid": "3348ada0-b16d-41e3-8ae5-0e317af1d54d", 00:09:19.075 "strip_size_kb": 64, 00:09:19.075 "state": "configuring", 00:09:19.075 "raid_level": "concat", 00:09:19.075 "superblock": true, 00:09:19.075 "num_base_bdevs": 3, 00:09:19.075 "num_base_bdevs_discovered": 1, 00:09:19.075 "num_base_bdevs_operational": 3, 00:09:19.075 "base_bdevs_list": [ 00:09:19.075 { 00:09:19.075 "name": "BaseBdev1", 00:09:19.075 "uuid": "80a14aec-ae8e-4b63-9ee5-188ff69f1219", 00:09:19.075 "is_configured": true, 00:09:19.075 "data_offset": 2048, 00:09:19.075 "data_size": 63488 00:09:19.075 }, 00:09:19.075 { 00:09:19.075 "name": null, 00:09:19.075 "uuid": "00b738d6-2539-4bf8-bb39-f5506bd3dc15", 00:09:19.075 "is_configured": false, 00:09:19.075 "data_offset": 0, 00:09:19.075 "data_size": 63488 00:09:19.075 }, 00:09:19.075 { 00:09:19.075 "name": null, 00:09:19.075 "uuid": "bab5afc9-51e0-4e1e-a941-c4d90cad5997", 00:09:19.075 "is_configured": false, 00:09:19.075 "data_offset": 0, 00:09:19.075 "data_size": 63488 00:09:19.075 } 00:09:19.075 ] 00:09:19.075 }' 00:09:19.075 17:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.075 17:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.644 
17:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.644 17:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.644 17:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.644 17:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:19.644 17:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.644 17:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:19.644 17:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:19.644 17:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.644 17:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.644 [2024-12-07 17:25:52.918326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:19.644 17:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.644 17:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:19.644 17:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.644 17:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.644 17:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:19.644 17:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.644 17:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:09:19.644 17:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.644 17:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.644 17:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.644 17:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.644 17:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.644 17:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.644 17:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.644 17:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.644 17:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.644 17:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.644 "name": "Existed_Raid", 00:09:19.644 "uuid": "3348ada0-b16d-41e3-8ae5-0e317af1d54d", 00:09:19.644 "strip_size_kb": 64, 00:09:19.644 "state": "configuring", 00:09:19.644 "raid_level": "concat", 00:09:19.644 "superblock": true, 00:09:19.644 "num_base_bdevs": 3, 00:09:19.644 "num_base_bdevs_discovered": 2, 00:09:19.645 "num_base_bdevs_operational": 3, 00:09:19.645 "base_bdevs_list": [ 00:09:19.645 { 00:09:19.645 "name": "BaseBdev1", 00:09:19.645 "uuid": "80a14aec-ae8e-4b63-9ee5-188ff69f1219", 00:09:19.645 "is_configured": true, 00:09:19.645 "data_offset": 2048, 00:09:19.645 "data_size": 63488 00:09:19.645 }, 00:09:19.645 { 00:09:19.645 "name": null, 00:09:19.645 "uuid": "00b738d6-2539-4bf8-bb39-f5506bd3dc15", 00:09:19.645 "is_configured": false, 00:09:19.645 "data_offset": 0, 00:09:19.645 "data_size": 
63488 00:09:19.645 }, 00:09:19.645 { 00:09:19.645 "name": "BaseBdev3", 00:09:19.645 "uuid": "bab5afc9-51e0-4e1e-a941-c4d90cad5997", 00:09:19.645 "is_configured": true, 00:09:19.645 "data_offset": 2048, 00:09:19.645 "data_size": 63488 00:09:19.645 } 00:09:19.645 ] 00:09:19.645 }' 00:09:19.645 17:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.645 17:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.213 17:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:20.213 17:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.213 17:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.213 17:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.213 17:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.213 17:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:20.213 17:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:20.213 17:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.213 17:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.213 [2024-12-07 17:25:53.385557] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:20.213 17:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.213 17:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:20.213 17:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:09:20.213 17:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:20.213 17:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:20.213 17:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:20.213 17:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:20.213 17:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.213 17:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.213 17:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.213 17:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.213 17:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.213 17:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.213 17:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.213 17:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.213 17:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.213 17:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.213 "name": "Existed_Raid", 00:09:20.213 "uuid": "3348ada0-b16d-41e3-8ae5-0e317af1d54d", 00:09:20.213 "strip_size_kb": 64, 00:09:20.213 "state": "configuring", 00:09:20.213 "raid_level": "concat", 00:09:20.213 "superblock": true, 00:09:20.213 "num_base_bdevs": 3, 00:09:20.213 "num_base_bdevs_discovered": 1, 00:09:20.213 "num_base_bdevs_operational": 
3, 00:09:20.213 "base_bdevs_list": [ 00:09:20.213 { 00:09:20.213 "name": null, 00:09:20.213 "uuid": "80a14aec-ae8e-4b63-9ee5-188ff69f1219", 00:09:20.213 "is_configured": false, 00:09:20.213 "data_offset": 0, 00:09:20.213 "data_size": 63488 00:09:20.213 }, 00:09:20.213 { 00:09:20.213 "name": null, 00:09:20.213 "uuid": "00b738d6-2539-4bf8-bb39-f5506bd3dc15", 00:09:20.213 "is_configured": false, 00:09:20.213 "data_offset": 0, 00:09:20.213 "data_size": 63488 00:09:20.213 }, 00:09:20.213 { 00:09:20.213 "name": "BaseBdev3", 00:09:20.213 "uuid": "bab5afc9-51e0-4e1e-a941-c4d90cad5997", 00:09:20.213 "is_configured": true, 00:09:20.213 "data_offset": 2048, 00:09:20.213 "data_size": 63488 00:09:20.213 } 00:09:20.213 ] 00:09:20.213 }' 00:09:20.213 17:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.213 17:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.782 17:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.782 17:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.782 17:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.782 17:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:20.782 17:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.782 17:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:20.782 17:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:20.782 17:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.782 17:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:20.782 [2024-12-07 17:25:53.981262] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:20.782 17:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.782 17:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:20.782 17:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:20.782 17:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:20.782 17:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:20.782 17:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:20.782 17:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:20.782 17:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.782 17:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.782 17:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.782 17:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.782 17:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.782 17:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.782 17:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.782 17:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.782 17:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:09:20.782 17:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.782 "name": "Existed_Raid", 00:09:20.782 "uuid": "3348ada0-b16d-41e3-8ae5-0e317af1d54d", 00:09:20.782 "strip_size_kb": 64, 00:09:20.782 "state": "configuring", 00:09:20.782 "raid_level": "concat", 00:09:20.782 "superblock": true, 00:09:20.782 "num_base_bdevs": 3, 00:09:20.782 "num_base_bdevs_discovered": 2, 00:09:20.782 "num_base_bdevs_operational": 3, 00:09:20.782 "base_bdevs_list": [ 00:09:20.782 { 00:09:20.782 "name": null, 00:09:20.782 "uuid": "80a14aec-ae8e-4b63-9ee5-188ff69f1219", 00:09:20.782 "is_configured": false, 00:09:20.782 "data_offset": 0, 00:09:20.782 "data_size": 63488 00:09:20.782 }, 00:09:20.782 { 00:09:20.782 "name": "BaseBdev2", 00:09:20.782 "uuid": "00b738d6-2539-4bf8-bb39-f5506bd3dc15", 00:09:20.782 "is_configured": true, 00:09:20.782 "data_offset": 2048, 00:09:20.782 "data_size": 63488 00:09:20.782 }, 00:09:20.782 { 00:09:20.782 "name": "BaseBdev3", 00:09:20.782 "uuid": "bab5afc9-51e0-4e1e-a941-c4d90cad5997", 00:09:20.782 "is_configured": true, 00:09:20.782 "data_offset": 2048, 00:09:20.782 "data_size": 63488 00:09:20.782 } 00:09:20.782 ] 00:09:20.782 }' 00:09:20.782 17:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.782 17:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.041 17:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:21.041 17:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.041 17:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.041 17:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.300 17:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:21.300 17:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:21.301 17:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.301 17:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:21.301 17:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.301 17:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.301 17:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.301 17:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 80a14aec-ae8e-4b63-9ee5-188ff69f1219 00:09:21.301 17:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.301 17:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.301 [2024-12-07 17:25:54.532239] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:21.301 [2024-12-07 17:25:54.532565] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:21.301 [2024-12-07 17:25:54.532605] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:21.301 [2024-12-07 17:25:54.532891] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:21.301 [2024-12-07 17:25:54.533090] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:21.301 [2024-12-07 17:25:54.533131] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:21.301 [2024-12-07 17:25:54.533304] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:09:21.301 NewBaseBdev 00:09:21.301 17:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.301 17:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:21.301 17:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:21.301 17:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:21.301 17:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:21.301 17:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:21.301 17:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:21.301 17:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:21.301 17:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.301 17:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.301 17:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.301 17:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:21.301 17:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.301 17:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.301 [ 00:09:21.301 { 00:09:21.301 "name": "NewBaseBdev", 00:09:21.301 "aliases": [ 00:09:21.301 "80a14aec-ae8e-4b63-9ee5-188ff69f1219" 00:09:21.301 ], 00:09:21.301 "product_name": "Malloc disk", 00:09:21.301 "block_size": 512, 00:09:21.301 "num_blocks": 65536, 00:09:21.301 "uuid": "80a14aec-ae8e-4b63-9ee5-188ff69f1219", 
00:09:21.301 "assigned_rate_limits": { 00:09:21.301 "rw_ios_per_sec": 0, 00:09:21.301 "rw_mbytes_per_sec": 0, 00:09:21.301 "r_mbytes_per_sec": 0, 00:09:21.301 "w_mbytes_per_sec": 0 00:09:21.301 }, 00:09:21.301 "claimed": true, 00:09:21.301 "claim_type": "exclusive_write", 00:09:21.301 "zoned": false, 00:09:21.301 "supported_io_types": { 00:09:21.301 "read": true, 00:09:21.301 "write": true, 00:09:21.301 "unmap": true, 00:09:21.301 "flush": true, 00:09:21.301 "reset": true, 00:09:21.301 "nvme_admin": false, 00:09:21.301 "nvme_io": false, 00:09:21.301 "nvme_io_md": false, 00:09:21.301 "write_zeroes": true, 00:09:21.301 "zcopy": true, 00:09:21.301 "get_zone_info": false, 00:09:21.301 "zone_management": false, 00:09:21.301 "zone_append": false, 00:09:21.301 "compare": false, 00:09:21.301 "compare_and_write": false, 00:09:21.301 "abort": true, 00:09:21.301 "seek_hole": false, 00:09:21.301 "seek_data": false, 00:09:21.301 "copy": true, 00:09:21.301 "nvme_iov_md": false 00:09:21.301 }, 00:09:21.301 "memory_domains": [ 00:09:21.301 { 00:09:21.301 "dma_device_id": "system", 00:09:21.301 "dma_device_type": 1 00:09:21.301 }, 00:09:21.301 { 00:09:21.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.301 "dma_device_type": 2 00:09:21.301 } 00:09:21.301 ], 00:09:21.301 "driver_specific": {} 00:09:21.301 } 00:09:21.301 ] 00:09:21.301 17:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.301 17:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:21.301 17:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:21.301 17:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.301 17:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:21.301 17:25:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:21.301 17:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:21.301 17:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:21.301 17:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.301 17:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.301 17:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.301 17:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.301 17:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.301 17:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.301 17:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.301 17:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.301 17:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.301 17:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.301 "name": "Existed_Raid", 00:09:21.301 "uuid": "3348ada0-b16d-41e3-8ae5-0e317af1d54d", 00:09:21.301 "strip_size_kb": 64, 00:09:21.301 "state": "online", 00:09:21.301 "raid_level": "concat", 00:09:21.301 "superblock": true, 00:09:21.301 "num_base_bdevs": 3, 00:09:21.301 "num_base_bdevs_discovered": 3, 00:09:21.301 "num_base_bdevs_operational": 3, 00:09:21.301 "base_bdevs_list": [ 00:09:21.301 { 00:09:21.301 "name": "NewBaseBdev", 00:09:21.301 "uuid": "80a14aec-ae8e-4b63-9ee5-188ff69f1219", 00:09:21.301 "is_configured": true, 00:09:21.301 "data_offset": 2048, 
00:09:21.301 "data_size": 63488 00:09:21.301 }, 00:09:21.301 { 00:09:21.301 "name": "BaseBdev2", 00:09:21.301 "uuid": "00b738d6-2539-4bf8-bb39-f5506bd3dc15", 00:09:21.301 "is_configured": true, 00:09:21.301 "data_offset": 2048, 00:09:21.301 "data_size": 63488 00:09:21.301 }, 00:09:21.301 { 00:09:21.301 "name": "BaseBdev3", 00:09:21.301 "uuid": "bab5afc9-51e0-4e1e-a941-c4d90cad5997", 00:09:21.301 "is_configured": true, 00:09:21.301 "data_offset": 2048, 00:09:21.301 "data_size": 63488 00:09:21.301 } 00:09:21.301 ] 00:09:21.301 }' 00:09:21.301 17:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.301 17:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.909 17:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:21.909 17:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:21.909 17:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:21.909 17:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:21.909 17:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:21.909 17:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:21.909 17:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:21.909 17:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:21.909 17:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.909 17:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.909 [2024-12-07 17:25:54.999841] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:09:21.909 17:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.909 17:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:21.909 "name": "Existed_Raid", 00:09:21.909 "aliases": [ 00:09:21.909 "3348ada0-b16d-41e3-8ae5-0e317af1d54d" 00:09:21.909 ], 00:09:21.909 "product_name": "Raid Volume", 00:09:21.909 "block_size": 512, 00:09:21.909 "num_blocks": 190464, 00:09:21.909 "uuid": "3348ada0-b16d-41e3-8ae5-0e317af1d54d", 00:09:21.909 "assigned_rate_limits": { 00:09:21.909 "rw_ios_per_sec": 0, 00:09:21.909 "rw_mbytes_per_sec": 0, 00:09:21.909 "r_mbytes_per_sec": 0, 00:09:21.909 "w_mbytes_per_sec": 0 00:09:21.909 }, 00:09:21.909 "claimed": false, 00:09:21.909 "zoned": false, 00:09:21.909 "supported_io_types": { 00:09:21.909 "read": true, 00:09:21.909 "write": true, 00:09:21.909 "unmap": true, 00:09:21.909 "flush": true, 00:09:21.909 "reset": true, 00:09:21.909 "nvme_admin": false, 00:09:21.909 "nvme_io": false, 00:09:21.909 "nvme_io_md": false, 00:09:21.909 "write_zeroes": true, 00:09:21.909 "zcopy": false, 00:09:21.909 "get_zone_info": false, 00:09:21.909 "zone_management": false, 00:09:21.909 "zone_append": false, 00:09:21.909 "compare": false, 00:09:21.909 "compare_and_write": false, 00:09:21.909 "abort": false, 00:09:21.909 "seek_hole": false, 00:09:21.909 "seek_data": false, 00:09:21.909 "copy": false, 00:09:21.909 "nvme_iov_md": false 00:09:21.909 }, 00:09:21.909 "memory_domains": [ 00:09:21.909 { 00:09:21.909 "dma_device_id": "system", 00:09:21.909 "dma_device_type": 1 00:09:21.909 }, 00:09:21.909 { 00:09:21.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.909 "dma_device_type": 2 00:09:21.909 }, 00:09:21.909 { 00:09:21.909 "dma_device_id": "system", 00:09:21.909 "dma_device_type": 1 00:09:21.909 }, 00:09:21.909 { 00:09:21.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.909 "dma_device_type": 2 00:09:21.909 }, 00:09:21.909 { 
00:09:21.909 "dma_device_id": "system", 00:09:21.909 "dma_device_type": 1 00:09:21.909 }, 00:09:21.909 { 00:09:21.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.910 "dma_device_type": 2 00:09:21.910 } 00:09:21.910 ], 00:09:21.910 "driver_specific": { 00:09:21.910 "raid": { 00:09:21.910 "uuid": "3348ada0-b16d-41e3-8ae5-0e317af1d54d", 00:09:21.910 "strip_size_kb": 64, 00:09:21.910 "state": "online", 00:09:21.910 "raid_level": "concat", 00:09:21.910 "superblock": true, 00:09:21.910 "num_base_bdevs": 3, 00:09:21.910 "num_base_bdevs_discovered": 3, 00:09:21.910 "num_base_bdevs_operational": 3, 00:09:21.910 "base_bdevs_list": [ 00:09:21.910 { 00:09:21.910 "name": "NewBaseBdev", 00:09:21.910 "uuid": "80a14aec-ae8e-4b63-9ee5-188ff69f1219", 00:09:21.910 "is_configured": true, 00:09:21.910 "data_offset": 2048, 00:09:21.910 "data_size": 63488 00:09:21.910 }, 00:09:21.910 { 00:09:21.910 "name": "BaseBdev2", 00:09:21.910 "uuid": "00b738d6-2539-4bf8-bb39-f5506bd3dc15", 00:09:21.910 "is_configured": true, 00:09:21.910 "data_offset": 2048, 00:09:21.910 "data_size": 63488 00:09:21.910 }, 00:09:21.910 { 00:09:21.910 "name": "BaseBdev3", 00:09:21.910 "uuid": "bab5afc9-51e0-4e1e-a941-c4d90cad5997", 00:09:21.910 "is_configured": true, 00:09:21.910 "data_offset": 2048, 00:09:21.910 "data_size": 63488 00:09:21.910 } 00:09:21.910 ] 00:09:21.910 } 00:09:21.910 } 00:09:21.910 }' 00:09:21.910 17:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:21.910 17:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:21.910 BaseBdev2 00:09:21.910 BaseBdev3' 00:09:21.910 17:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.910 17:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 
00:09:21.910 17:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:21.910 17:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:21.910 17:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.910 17:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.910 17:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.910 17:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.910 17:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:21.910 17:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:21.910 17:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:21.910 17:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.910 17:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:21.910 17:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.910 17:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.910 17:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.910 17:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:21.910 17:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:21.910 17:25:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:21.910 17:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:21.910 17:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.910 17:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.910 17:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.910 17:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.910 17:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:21.910 17:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:21.910 17:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:21.910 17:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.910 17:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.910 [2024-12-07 17:25:55.255163] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:21.910 [2024-12-07 17:25:55.255193] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:21.910 [2024-12-07 17:25:55.255282] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:21.910 [2024-12-07 17:25:55.255340] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:21.910 [2024-12-07 17:25:55.255353] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:21.910 17:25:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.910 17:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66247 00:09:21.910 17:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66247 ']' 00:09:21.910 17:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 66247 00:09:21.910 17:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:21.910 17:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:21.910 17:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66247 00:09:22.168 killing process with pid 66247 00:09:22.168 17:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:22.168 17:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:22.168 17:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66247' 00:09:22.168 17:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66247 00:09:22.168 [2024-12-07 17:25:55.300075] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:22.168 17:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66247 00:09:22.426 [2024-12-07 17:25:55.588223] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:23.363 17:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:23.363 00:09:23.363 real 0m10.311s 00:09:23.363 user 0m16.462s 00:09:23.363 sys 0m1.812s 00:09:23.363 17:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:23.363 17:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:09:23.363 ************************************ 00:09:23.363 END TEST raid_state_function_test_sb 00:09:23.363 ************************************ 00:09:23.363 17:25:56 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:09:23.363 17:25:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:23.363 17:25:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:23.363 17:25:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:23.623 ************************************ 00:09:23.623 START TEST raid_superblock_test 00:09:23.623 ************************************ 00:09:23.623 17:25:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:09:23.623 17:25:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:23.623 17:25:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:23.623 17:25:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:23.623 17:25:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:23.623 17:25:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:23.623 17:25:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:23.623 17:25:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:23.623 17:25:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:23.623 17:25:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:23.623 17:25:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:23.623 17:25:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:23.623 17:25:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:23.623 17:25:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:23.623 17:25:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:23.623 17:25:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:23.623 17:25:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:23.623 17:25:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66867 00:09:23.623 17:25:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:23.623 17:25:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66867 00:09:23.623 17:25:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 66867 ']' 00:09:23.623 17:25:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:23.623 17:25:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:23.623 17:25:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:23.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:23.623 17:25:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:23.623 17:25:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.623 [2024-12-07 17:25:56.841457] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:09:23.623 [2024-12-07 17:25:56.841645] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66867 ] 00:09:23.884 [2024-12-07 17:25:57.015809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.884 [2024-12-07 17:25:57.123843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.144 [2024-12-07 17:25:57.317404] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:24.144 [2024-12-07 17:25:57.317496] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:24.416 17:25:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:24.416 17:25:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:24.416 17:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:24.416 17:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:24.416 17:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:24.416 17:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:24.416 17:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:24.416 17:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:24.416 17:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:24.416 17:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:24.416 17:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:24.416 
17:25:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.416 17:25:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.416 malloc1 00:09:24.416 17:25:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.416 17:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:24.416 17:25:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.416 17:25:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.416 [2024-12-07 17:25:57.714442] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:24.416 [2024-12-07 17:25:57.714596] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:24.416 [2024-12-07 17:25:57.714637] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:24.416 [2024-12-07 17:25:57.714666] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:24.416 [2024-12-07 17:25:57.716824] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:24.416 [2024-12-07 17:25:57.716909] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:24.416 pt1 00:09:24.416 17:25:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.416 17:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:24.416 17:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:24.416 17:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:24.416 17:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:24.416 17:25:57 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:24.416 17:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:24.416 17:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:24.416 17:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:24.416 17:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:24.416 17:25:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.416 17:25:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.416 malloc2 00:09:24.416 17:25:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.416 17:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:24.416 17:25:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.416 17:25:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.416 [2024-12-07 17:25:57.767549] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:24.416 [2024-12-07 17:25:57.767656] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:24.416 [2024-12-07 17:25:57.767717] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:24.416 [2024-12-07 17:25:57.767749] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:24.416 [2024-12-07 17:25:57.769763] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:24.416 [2024-12-07 17:25:57.769829] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:24.416 
pt2 00:09:24.416 17:25:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.416 17:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:24.416 17:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:24.416 17:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:24.416 17:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:24.416 17:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:24.416 17:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:24.416 17:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:24.416 17:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:24.416 17:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:24.416 17:25:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.416 17:25:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.676 malloc3 00:09:24.676 17:25:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.676 17:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:24.676 17:25:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.676 17:25:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.676 [2024-12-07 17:25:57.833628] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:24.676 [2024-12-07 17:25:57.833717] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:24.676 [2024-12-07 17:25:57.833772] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:24.676 [2024-12-07 17:25:57.833800] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:24.676 [2024-12-07 17:25:57.835814] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:24.677 [2024-12-07 17:25:57.835882] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:24.677 pt3 00:09:24.677 17:25:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.677 17:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:24.677 17:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:24.677 17:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:24.677 17:25:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.677 17:25:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.677 [2024-12-07 17:25:57.845649] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:24.677 [2024-12-07 17:25:57.847414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:24.677 [2024-12-07 17:25:57.847479] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:24.677 [2024-12-07 17:25:57.847626] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:24.677 [2024-12-07 17:25:57.847640] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:24.677 [2024-12-07 17:25:57.847871] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:09:24.677 [2024-12-07 17:25:57.848072] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:24.677 [2024-12-07 17:25:57.848082] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:24.677 [2024-12-07 17:25:57.848230] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:24.677 17:25:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.677 17:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:24.677 17:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:24.677 17:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:24.677 17:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:24.677 17:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:24.677 17:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:24.677 17:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.677 17:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.677 17:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.677 17:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.677 17:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.677 17:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:24.677 17:25:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.677 17:25:57 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.677 17:25:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.677 17:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.677 "name": "raid_bdev1", 00:09:24.677 "uuid": "0cb36fc2-7e01-40a1-b6bc-6e7eac700f49", 00:09:24.677 "strip_size_kb": 64, 00:09:24.677 "state": "online", 00:09:24.677 "raid_level": "concat", 00:09:24.677 "superblock": true, 00:09:24.677 "num_base_bdevs": 3, 00:09:24.677 "num_base_bdevs_discovered": 3, 00:09:24.677 "num_base_bdevs_operational": 3, 00:09:24.677 "base_bdevs_list": [ 00:09:24.677 { 00:09:24.677 "name": "pt1", 00:09:24.677 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:24.677 "is_configured": true, 00:09:24.677 "data_offset": 2048, 00:09:24.677 "data_size": 63488 00:09:24.677 }, 00:09:24.677 { 00:09:24.677 "name": "pt2", 00:09:24.677 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:24.677 "is_configured": true, 00:09:24.677 "data_offset": 2048, 00:09:24.677 "data_size": 63488 00:09:24.677 }, 00:09:24.677 { 00:09:24.677 "name": "pt3", 00:09:24.677 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:24.677 "is_configured": true, 00:09:24.677 "data_offset": 2048, 00:09:24.677 "data_size": 63488 00:09:24.677 } 00:09:24.677 ] 00:09:24.677 }' 00:09:24.677 17:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.677 17:25:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.936 17:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:24.936 17:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:24.936 17:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:24.936 17:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:24.936 17:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:24.936 17:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:24.936 17:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:24.936 17:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:24.936 17:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.936 17:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.936 [2024-12-07 17:25:58.297226] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:25.195 17:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.195 17:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:25.195 "name": "raid_bdev1", 00:09:25.195 "aliases": [ 00:09:25.195 "0cb36fc2-7e01-40a1-b6bc-6e7eac700f49" 00:09:25.195 ], 00:09:25.195 "product_name": "Raid Volume", 00:09:25.195 "block_size": 512, 00:09:25.195 "num_blocks": 190464, 00:09:25.195 "uuid": "0cb36fc2-7e01-40a1-b6bc-6e7eac700f49", 00:09:25.195 "assigned_rate_limits": { 00:09:25.195 "rw_ios_per_sec": 0, 00:09:25.195 "rw_mbytes_per_sec": 0, 00:09:25.195 "r_mbytes_per_sec": 0, 00:09:25.195 "w_mbytes_per_sec": 0 00:09:25.195 }, 00:09:25.195 "claimed": false, 00:09:25.195 "zoned": false, 00:09:25.195 "supported_io_types": { 00:09:25.195 "read": true, 00:09:25.195 "write": true, 00:09:25.195 "unmap": true, 00:09:25.195 "flush": true, 00:09:25.195 "reset": true, 00:09:25.195 "nvme_admin": false, 00:09:25.195 "nvme_io": false, 00:09:25.195 "nvme_io_md": false, 00:09:25.195 "write_zeroes": true, 00:09:25.195 "zcopy": false, 00:09:25.195 "get_zone_info": false, 00:09:25.195 "zone_management": false, 00:09:25.195 "zone_append": false, 00:09:25.195 "compare": 
false, 00:09:25.195 "compare_and_write": false, 00:09:25.195 "abort": false, 00:09:25.195 "seek_hole": false, 00:09:25.195 "seek_data": false, 00:09:25.195 "copy": false, 00:09:25.195 "nvme_iov_md": false 00:09:25.195 }, 00:09:25.195 "memory_domains": [ 00:09:25.195 { 00:09:25.195 "dma_device_id": "system", 00:09:25.195 "dma_device_type": 1 00:09:25.195 }, 00:09:25.195 { 00:09:25.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.195 "dma_device_type": 2 00:09:25.195 }, 00:09:25.195 { 00:09:25.195 "dma_device_id": "system", 00:09:25.195 "dma_device_type": 1 00:09:25.195 }, 00:09:25.195 { 00:09:25.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.195 "dma_device_type": 2 00:09:25.195 }, 00:09:25.195 { 00:09:25.195 "dma_device_id": "system", 00:09:25.195 "dma_device_type": 1 00:09:25.195 }, 00:09:25.195 { 00:09:25.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.195 "dma_device_type": 2 00:09:25.195 } 00:09:25.195 ], 00:09:25.195 "driver_specific": { 00:09:25.195 "raid": { 00:09:25.195 "uuid": "0cb36fc2-7e01-40a1-b6bc-6e7eac700f49", 00:09:25.195 "strip_size_kb": 64, 00:09:25.195 "state": "online", 00:09:25.195 "raid_level": "concat", 00:09:25.195 "superblock": true, 00:09:25.195 "num_base_bdevs": 3, 00:09:25.195 "num_base_bdevs_discovered": 3, 00:09:25.195 "num_base_bdevs_operational": 3, 00:09:25.195 "base_bdevs_list": [ 00:09:25.195 { 00:09:25.195 "name": "pt1", 00:09:25.195 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:25.195 "is_configured": true, 00:09:25.195 "data_offset": 2048, 00:09:25.195 "data_size": 63488 00:09:25.195 }, 00:09:25.195 { 00:09:25.195 "name": "pt2", 00:09:25.195 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:25.195 "is_configured": true, 00:09:25.195 "data_offset": 2048, 00:09:25.195 "data_size": 63488 00:09:25.195 }, 00:09:25.195 { 00:09:25.195 "name": "pt3", 00:09:25.195 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:25.195 "is_configured": true, 00:09:25.195 "data_offset": 2048, 00:09:25.195 
"data_size": 63488 00:09:25.195 } 00:09:25.195 ] 00:09:25.195 } 00:09:25.195 } 00:09:25.195 }' 00:09:25.195 17:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:25.195 17:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:25.195 pt2 00:09:25.195 pt3' 00:09:25.195 17:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.195 17:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:25.195 17:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:25.195 17:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.195 17:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:25.195 17:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.195 17:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.195 17:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.195 17:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:25.195 17:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:25.195 17:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:25.195 17:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:25.195 17:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.195 17:25:58 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.195 17:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.195 17:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.195 17:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:25.195 17:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:25.195 17:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:25.195 17:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:25.195 17:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.196 17:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.196 17:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.196 17:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.455 17:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:25.455 17:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:25.455 17:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:25.455 17:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:25.455 17:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.455 17:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.455 [2024-12-07 17:25:58.588659] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:25.455 17:25:58 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.455 17:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=0cb36fc2-7e01-40a1-b6bc-6e7eac700f49 00:09:25.455 17:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 0cb36fc2-7e01-40a1-b6bc-6e7eac700f49 ']' 00:09:25.455 17:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:25.455 17:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.455 17:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.455 [2024-12-07 17:25:58.636302] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:25.455 [2024-12-07 17:25:58.636337] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:25.455 [2024-12-07 17:25:58.636437] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:25.455 [2024-12-07 17:25:58.636524] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:25.455 [2024-12-07 17:25:58.636535] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:25.455 17:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.455 17:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.455 17:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:25.455 17:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.455 17:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.455 17:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.455 17:25:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:25.455 17:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:25.455 17:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:25.455 17:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:25.455 17:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.455 17:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.455 17:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.455 17:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:25.455 17:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:25.455 17:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.455 17:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.455 17:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.456 17:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:25.456 17:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:25.456 17:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.456 17:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.456 17:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.456 17:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:25.456 17:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 
-- # rpc_cmd bdev_get_bdevs 00:09:25.456 17:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.456 17:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.456 17:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.456 17:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:25.456 17:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:25.456 17:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:25.456 17:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:25.456 17:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:25.456 17:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:25.456 17:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:25.456 17:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:25.456 17:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:25.456 17:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.456 17:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.456 [2024-12-07 17:25:58.788122] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:25.456 [2024-12-07 17:25:58.790317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is 
claimed 00:09:25.456 [2024-12-07 17:25:58.790378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:25.456 [2024-12-07 17:25:58.790442] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:25.456 [2024-12-07 17:25:58.790500] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:25.456 [2024-12-07 17:25:58.790522] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:25.456 [2024-12-07 17:25:58.790543] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:25.456 [2024-12-07 17:25:58.790554] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:25.456 request: 00:09:25.456 { 00:09:25.456 "name": "raid_bdev1", 00:09:25.456 "raid_level": "concat", 00:09:25.456 "base_bdevs": [ 00:09:25.456 "malloc1", 00:09:25.456 "malloc2", 00:09:25.456 "malloc3" 00:09:25.456 ], 00:09:25.456 "strip_size_kb": 64, 00:09:25.456 "superblock": false, 00:09:25.456 "method": "bdev_raid_create", 00:09:25.456 "req_id": 1 00:09:25.456 } 00:09:25.456 Got JSON-RPC error response 00:09:25.456 response: 00:09:25.456 { 00:09:25.456 "code": -17, 00:09:25.456 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:25.456 } 00:09:25.456 17:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:25.456 17:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:25.456 17:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:25.456 17:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:25.456 17:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
00:09:25.456 17:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.456 17:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:25.456 17:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.456 17:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.456 17:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.716 17:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:25.716 17:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:25.716 17:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:25.716 17:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.716 17:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.716 [2024-12-07 17:25:58.847933] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:25.716 [2024-12-07 17:25:58.848055] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:25.716 [2024-12-07 17:25:58.848099] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:25.717 [2024-12-07 17:25:58.848162] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:25.717 [2024-12-07 17:25:58.850625] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:25.717 [2024-12-07 17:25:58.850709] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:25.717 [2024-12-07 17:25:58.850826] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:25.717 [2024-12-07 17:25:58.850915] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:25.717 pt1 00:09:25.717 17:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.717 17:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:25.717 17:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:25.717 17:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:25.717 17:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:25.717 17:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:25.717 17:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.717 17:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.717 17:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.717 17:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.717 17:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.717 17:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.717 17:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:25.717 17:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.717 17:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.717 17:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.717 17:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.717 "name": "raid_bdev1", 
00:09:25.717 "uuid": "0cb36fc2-7e01-40a1-b6bc-6e7eac700f49", 00:09:25.717 "strip_size_kb": 64, 00:09:25.717 "state": "configuring", 00:09:25.717 "raid_level": "concat", 00:09:25.717 "superblock": true, 00:09:25.717 "num_base_bdevs": 3, 00:09:25.717 "num_base_bdevs_discovered": 1, 00:09:25.717 "num_base_bdevs_operational": 3, 00:09:25.717 "base_bdevs_list": [ 00:09:25.717 { 00:09:25.717 "name": "pt1", 00:09:25.717 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:25.717 "is_configured": true, 00:09:25.717 "data_offset": 2048, 00:09:25.717 "data_size": 63488 00:09:25.717 }, 00:09:25.717 { 00:09:25.717 "name": null, 00:09:25.717 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:25.717 "is_configured": false, 00:09:25.717 "data_offset": 2048, 00:09:25.717 "data_size": 63488 00:09:25.717 }, 00:09:25.717 { 00:09:25.717 "name": null, 00:09:25.717 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:25.717 "is_configured": false, 00:09:25.717 "data_offset": 2048, 00:09:25.717 "data_size": 63488 00:09:25.717 } 00:09:25.717 ] 00:09:25.717 }' 00:09:25.717 17:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.717 17:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.977 17:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:25.977 17:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:25.978 17:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.978 17:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.978 [2024-12-07 17:25:59.299263] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:25.978 [2024-12-07 17:25:59.299372] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:25.978 [2024-12-07 17:25:59.299409] 
vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:25.978 [2024-12-07 17:25:59.299421] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:25.978 [2024-12-07 17:25:59.300060] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:25.978 [2024-12-07 17:25:59.300084] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:25.978 [2024-12-07 17:25:59.300215] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:25.978 [2024-12-07 17:25:59.300255] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:25.978 pt2 00:09:25.978 17:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.978 17:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:25.978 17:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.978 17:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.978 [2024-12-07 17:25:59.307240] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:25.978 17:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.978 17:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:25.978 17:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:25.978 17:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:25.978 17:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:25.978 17:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:25.978 17:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:09:25.978 17:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.978 17:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.978 17:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.978 17:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.978 17:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.978 17:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:25.978 17:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.978 17:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.978 17:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.237 17:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.238 "name": "raid_bdev1", 00:09:26.238 "uuid": "0cb36fc2-7e01-40a1-b6bc-6e7eac700f49", 00:09:26.238 "strip_size_kb": 64, 00:09:26.238 "state": "configuring", 00:09:26.238 "raid_level": "concat", 00:09:26.238 "superblock": true, 00:09:26.238 "num_base_bdevs": 3, 00:09:26.238 "num_base_bdevs_discovered": 1, 00:09:26.238 "num_base_bdevs_operational": 3, 00:09:26.238 "base_bdevs_list": [ 00:09:26.238 { 00:09:26.238 "name": "pt1", 00:09:26.238 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:26.238 "is_configured": true, 00:09:26.238 "data_offset": 2048, 00:09:26.238 "data_size": 63488 00:09:26.238 }, 00:09:26.238 { 00:09:26.238 "name": null, 00:09:26.238 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:26.238 "is_configured": false, 00:09:26.238 "data_offset": 0, 00:09:26.238 "data_size": 63488 00:09:26.238 }, 00:09:26.238 { 00:09:26.238 "name": null, 00:09:26.238 
"uuid": "00000000-0000-0000-0000-000000000003", 00:09:26.238 "is_configured": false, 00:09:26.238 "data_offset": 2048, 00:09:26.238 "data_size": 63488 00:09:26.238 } 00:09:26.238 ] 00:09:26.238 }' 00:09:26.238 17:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.238 17:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.497 17:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:26.497 17:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:26.497 17:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:26.497 17:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.497 17:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.497 [2024-12-07 17:25:59.758532] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:26.497 [2024-12-07 17:25:59.758718] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:26.497 [2024-12-07 17:25:59.758760] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:26.497 [2024-12-07 17:25:59.758801] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:26.497 [2024-12-07 17:25:59.759430] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:26.497 [2024-12-07 17:25:59.759513] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:26.497 [2024-12-07 17:25:59.759671] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:26.497 [2024-12-07 17:25:59.759735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:26.497 pt2 00:09:26.497 17:25:59 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.497 17:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:26.497 17:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:26.497 17:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:26.497 17:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.497 17:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.497 [2024-12-07 17:25:59.770464] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:26.497 [2024-12-07 17:25:59.770568] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:26.497 [2024-12-07 17:25:59.770602] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:26.497 [2024-12-07 17:25:59.770637] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:26.497 [2024-12-07 17:25:59.771132] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:26.497 [2024-12-07 17:25:59.771204] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:26.497 [2024-12-07 17:25:59.771303] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:26.497 [2024-12-07 17:25:59.771362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:26.497 [2024-12-07 17:25:59.771527] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:26.497 [2024-12-07 17:25:59.771575] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:26.497 [2024-12-07 17:25:59.771871] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:09:26.497 [2024-12-07 17:25:59.772095] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:26.497 [2024-12-07 17:25:59.772141] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:26.497 [2024-12-07 17:25:59.772351] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:26.497 pt3 00:09:26.497 17:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.497 17:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:26.497 17:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:26.497 17:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:26.497 17:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:26.497 17:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:26.497 17:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:26.497 17:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:26.497 17:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:26.497 17:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.497 17:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.497 17:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.497 17:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.497 17:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.497 17:25:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:26.497 17:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.497 17:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.497 17:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.497 17:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.497 "name": "raid_bdev1", 00:09:26.497 "uuid": "0cb36fc2-7e01-40a1-b6bc-6e7eac700f49", 00:09:26.497 "strip_size_kb": 64, 00:09:26.497 "state": "online", 00:09:26.497 "raid_level": "concat", 00:09:26.497 "superblock": true, 00:09:26.497 "num_base_bdevs": 3, 00:09:26.497 "num_base_bdevs_discovered": 3, 00:09:26.497 "num_base_bdevs_operational": 3, 00:09:26.497 "base_bdevs_list": [ 00:09:26.497 { 00:09:26.497 "name": "pt1", 00:09:26.497 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:26.497 "is_configured": true, 00:09:26.497 "data_offset": 2048, 00:09:26.497 "data_size": 63488 00:09:26.497 }, 00:09:26.497 { 00:09:26.497 "name": "pt2", 00:09:26.497 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:26.497 "is_configured": true, 00:09:26.497 "data_offset": 2048, 00:09:26.497 "data_size": 63488 00:09:26.497 }, 00:09:26.497 { 00:09:26.497 "name": "pt3", 00:09:26.497 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:26.497 "is_configured": true, 00:09:26.497 "data_offset": 2048, 00:09:26.497 "data_size": 63488 00:09:26.497 } 00:09:26.497 ] 00:09:26.497 }' 00:09:26.497 17:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.497 17:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.066 17:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:27.066 17:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:09:27.066 17:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:27.066 17:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:27.066 17:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:27.066 17:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:27.066 17:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:27.066 17:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:27.066 17:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.066 17:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.066 [2024-12-07 17:26:00.230104] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:27.066 17:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.066 17:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:27.066 "name": "raid_bdev1", 00:09:27.066 "aliases": [ 00:09:27.066 "0cb36fc2-7e01-40a1-b6bc-6e7eac700f49" 00:09:27.066 ], 00:09:27.066 "product_name": "Raid Volume", 00:09:27.067 "block_size": 512, 00:09:27.067 "num_blocks": 190464, 00:09:27.067 "uuid": "0cb36fc2-7e01-40a1-b6bc-6e7eac700f49", 00:09:27.067 "assigned_rate_limits": { 00:09:27.067 "rw_ios_per_sec": 0, 00:09:27.067 "rw_mbytes_per_sec": 0, 00:09:27.067 "r_mbytes_per_sec": 0, 00:09:27.067 "w_mbytes_per_sec": 0 00:09:27.067 }, 00:09:27.067 "claimed": false, 00:09:27.067 "zoned": false, 00:09:27.067 "supported_io_types": { 00:09:27.067 "read": true, 00:09:27.067 "write": true, 00:09:27.067 "unmap": true, 00:09:27.067 "flush": true, 00:09:27.067 "reset": true, 00:09:27.067 "nvme_admin": false, 00:09:27.067 "nvme_io": false, 
00:09:27.067 "nvme_io_md": false, 00:09:27.067 "write_zeroes": true, 00:09:27.067 "zcopy": false, 00:09:27.067 "get_zone_info": false, 00:09:27.067 "zone_management": false, 00:09:27.067 "zone_append": false, 00:09:27.067 "compare": false, 00:09:27.067 "compare_and_write": false, 00:09:27.067 "abort": false, 00:09:27.067 "seek_hole": false, 00:09:27.067 "seek_data": false, 00:09:27.067 "copy": false, 00:09:27.067 "nvme_iov_md": false 00:09:27.067 }, 00:09:27.067 "memory_domains": [ 00:09:27.067 { 00:09:27.067 "dma_device_id": "system", 00:09:27.067 "dma_device_type": 1 00:09:27.067 }, 00:09:27.067 { 00:09:27.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.067 "dma_device_type": 2 00:09:27.067 }, 00:09:27.067 { 00:09:27.067 "dma_device_id": "system", 00:09:27.067 "dma_device_type": 1 00:09:27.067 }, 00:09:27.067 { 00:09:27.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.067 "dma_device_type": 2 00:09:27.067 }, 00:09:27.067 { 00:09:27.067 "dma_device_id": "system", 00:09:27.067 "dma_device_type": 1 00:09:27.067 }, 00:09:27.067 { 00:09:27.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.067 "dma_device_type": 2 00:09:27.067 } 00:09:27.067 ], 00:09:27.067 "driver_specific": { 00:09:27.067 "raid": { 00:09:27.067 "uuid": "0cb36fc2-7e01-40a1-b6bc-6e7eac700f49", 00:09:27.067 "strip_size_kb": 64, 00:09:27.067 "state": "online", 00:09:27.067 "raid_level": "concat", 00:09:27.067 "superblock": true, 00:09:27.067 "num_base_bdevs": 3, 00:09:27.067 "num_base_bdevs_discovered": 3, 00:09:27.067 "num_base_bdevs_operational": 3, 00:09:27.067 "base_bdevs_list": [ 00:09:27.067 { 00:09:27.067 "name": "pt1", 00:09:27.067 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:27.067 "is_configured": true, 00:09:27.067 "data_offset": 2048, 00:09:27.067 "data_size": 63488 00:09:27.067 }, 00:09:27.067 { 00:09:27.067 "name": "pt2", 00:09:27.067 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:27.067 "is_configured": true, 00:09:27.067 "data_offset": 2048, 00:09:27.067 
"data_size": 63488 00:09:27.067 }, 00:09:27.067 { 00:09:27.067 "name": "pt3", 00:09:27.067 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:27.067 "is_configured": true, 00:09:27.067 "data_offset": 2048, 00:09:27.067 "data_size": 63488 00:09:27.067 } 00:09:27.067 ] 00:09:27.067 } 00:09:27.067 } 00:09:27.067 }' 00:09:27.067 17:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:27.067 17:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:27.067 pt2 00:09:27.067 pt3' 00:09:27.067 17:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:27.067 17:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:27.067 17:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:27.067 17:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:27.067 17:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:27.067 17:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.067 17:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.067 17:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.067 17:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:27.067 17:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:27.067 17:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:27.067 17:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:09:27.067 17:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:27.067 17:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.067 17:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.067 17:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.067 17:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:27.067 17:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:27.067 17:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:27.067 17:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:27.067 17:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:27.067 17:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.067 17:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.067 17:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.327 17:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:27.327 17:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:27.327 17:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:27.327 17:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.327 17:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.327 17:26:00 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:27.327 [2024-12-07 17:26:00.461578] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:27.327 17:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.327 17:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 0cb36fc2-7e01-40a1-b6bc-6e7eac700f49 '!=' 0cb36fc2-7e01-40a1-b6bc-6e7eac700f49 ']' 00:09:27.327 17:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:27.327 17:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:27.327 17:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:27.327 17:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66867 00:09:27.327 17:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 66867 ']' 00:09:27.327 17:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 66867 00:09:27.327 17:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:27.327 17:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:27.327 17:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66867 00:09:27.327 17:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:27.327 17:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:27.327 17:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66867' 00:09:27.327 killing process with pid 66867 00:09:27.327 17:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 66867 00:09:27.327 [2024-12-07 17:26:00.535731] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:09:27.327 17:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 66867 00:09:27.327 [2024-12-07 17:26:00.535972] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:27.327 [2024-12-07 17:26:00.536060] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:27.327 [2024-12-07 17:26:00.536076] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:27.587 [2024-12-07 17:26:00.868921] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:28.968 17:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:28.968 00:09:28.968 real 0m5.314s 00:09:28.968 user 0m7.570s 00:09:28.968 sys 0m0.900s 00:09:28.968 17:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:28.968 17:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.968 ************************************ 00:09:28.968 END TEST raid_superblock_test 00:09:28.968 ************************************ 00:09:28.968 17:26:02 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:09:28.968 17:26:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:28.968 17:26:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:28.968 17:26:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:28.968 ************************************ 00:09:28.968 START TEST raid_read_error_test 00:09:28.968 ************************************ 00:09:28.968 17:26:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:09:28.968 17:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:28.968 17:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:09:28.968 17:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:28.968 17:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:28.968 17:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:28.968 17:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:28.968 17:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:28.968 17:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:28.968 17:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:28.968 17:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:28.968 17:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:28.968 17:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:28.968 17:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:28.968 17:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:28.968 17:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:28.968 17:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:28.968 17:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:28.968 17:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:28.968 17:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:28.968 17:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:28.968 17:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:28.968 17:26:02 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:28.968 17:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:28.968 17:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:28.968 17:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:28.968 17:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.04KJAGl1xr 00:09:28.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:28.968 17:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67121 00:09:28.968 17:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67121 00:09:28.968 17:26:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 67121 ']' 00:09:28.968 17:26:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.968 17:26:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:28.968 17:26:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:28.968 17:26:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:28.968 17:26:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.968 17:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:28.968 [2024-12-07 17:26:02.230128] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:09:28.968 [2024-12-07 17:26:02.230255] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67121 ] 00:09:29.229 [2024-12-07 17:26:02.402347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.229 [2024-12-07 17:26:02.531539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.489 [2024-12-07 17:26:02.763393] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:29.489 [2024-12-07 17:26:02.763449] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:29.748 17:26:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:29.748 17:26:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:29.748 17:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:29.748 17:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:29.748 17:26:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.748 17:26:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.748 BaseBdev1_malloc 00:09:29.748 17:26:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.748 17:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:29.748 17:26:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.748 17:26:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.748 true 00:09:29.748 17:26:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:29.748 17:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:29.748 17:26:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.748 17:26:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.748 [2024-12-07 17:26:03.116886] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:29.748 [2024-12-07 17:26:03.116985] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:29.748 [2024-12-07 17:26:03.117012] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:29.748 [2024-12-07 17:26:03.117027] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:29.748 [2024-12-07 17:26:03.119464] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:29.748 [2024-12-07 17:26:03.119599] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:29.748 BaseBdev1 00:09:29.748 17:26:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.748 17:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:29.748 17:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:29.748 17:26:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.748 17:26:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.007 BaseBdev2_malloc 00:09:30.007 17:26:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.007 17:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:30.007 17:26:03 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.007 17:26:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.007 true 00:09:30.007 17:26:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.007 17:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:30.007 17:26:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.007 17:26:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.007 [2024-12-07 17:26:03.191086] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:30.007 [2024-12-07 17:26:03.191163] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:30.007 [2024-12-07 17:26:03.191187] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:30.007 [2024-12-07 17:26:03.191201] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:30.007 [2024-12-07 17:26:03.193689] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:30.007 [2024-12-07 17:26:03.193743] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:30.007 BaseBdev2 00:09:30.007 17:26:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.008 17:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:30.008 17:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:30.008 17:26:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.008 17:26:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.008 BaseBdev3_malloc 00:09:30.008 17:26:03 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.008 17:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:30.008 17:26:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.008 17:26:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.008 true 00:09:30.008 17:26:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.008 17:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:30.008 17:26:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.008 17:26:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.008 [2024-12-07 17:26:03.273490] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:30.008 [2024-12-07 17:26:03.273570] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:30.008 [2024-12-07 17:26:03.273594] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:30.008 [2024-12-07 17:26:03.273608] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:30.008 [2024-12-07 17:26:03.276216] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:30.008 [2024-12-07 17:26:03.276265] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:30.008 BaseBdev3 00:09:30.008 17:26:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.008 17:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:30.008 17:26:03 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.008 17:26:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.008 [2024-12-07 17:26:03.281567] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:30.008 [2024-12-07 17:26:03.283727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:30.008 [2024-12-07 17:26:03.283812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:30.008 [2024-12-07 17:26:03.284075] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:30.008 [2024-12-07 17:26:03.284090] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:30.008 [2024-12-07 17:26:03.284357] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:30.008 [2024-12-07 17:26:03.284549] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:30.008 [2024-12-07 17:26:03.284566] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:30.008 [2024-12-07 17:26:03.284740] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:30.008 17:26:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.008 17:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:30.008 17:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:30.008 17:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:30.008 17:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:30.008 17:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:30.008 17:26:03 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.008 17:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.008 17:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.008 17:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.008 17:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.008 17:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.008 17:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:30.008 17:26:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.008 17:26:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.008 17:26:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.008 17:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.008 "name": "raid_bdev1", 00:09:30.008 "uuid": "edcb6f62-bc39-4691-97a1-953742ae6ca7", 00:09:30.008 "strip_size_kb": 64, 00:09:30.008 "state": "online", 00:09:30.008 "raid_level": "concat", 00:09:30.008 "superblock": true, 00:09:30.008 "num_base_bdevs": 3, 00:09:30.008 "num_base_bdevs_discovered": 3, 00:09:30.008 "num_base_bdevs_operational": 3, 00:09:30.008 "base_bdevs_list": [ 00:09:30.008 { 00:09:30.008 "name": "BaseBdev1", 00:09:30.008 "uuid": "2722f23a-7cd1-5703-a1ba-304e0c449491", 00:09:30.008 "is_configured": true, 00:09:30.008 "data_offset": 2048, 00:09:30.008 "data_size": 63488 00:09:30.008 }, 00:09:30.008 { 00:09:30.008 "name": "BaseBdev2", 00:09:30.008 "uuid": "f213a1d9-58f4-5bfa-86ad-dc1b6c1a2af0", 00:09:30.008 "is_configured": true, 00:09:30.008 "data_offset": 2048, 00:09:30.008 "data_size": 63488 
00:09:30.008 }, 00:09:30.008 { 00:09:30.008 "name": "BaseBdev3", 00:09:30.008 "uuid": "54dc0933-5d22-5fe9-9380-abee20a14011", 00:09:30.008 "is_configured": true, 00:09:30.008 "data_offset": 2048, 00:09:30.008 "data_size": 63488 00:09:30.008 } 00:09:30.008 ] 00:09:30.008 }' 00:09:30.008 17:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.008 17:26:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.574 17:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:30.574 17:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:30.574 [2024-12-07 17:26:03.821975] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:31.527 17:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:31.527 17:26:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.527 17:26:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.527 17:26:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.527 17:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:31.527 17:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:31.527 17:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:31.528 17:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:31.528 17:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:31.528 17:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:31.528 17:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:31.528 17:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:31.528 17:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:31.528 17:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.528 17:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.528 17:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.528 17:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.528 17:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.528 17:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:31.528 17:26:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.528 17:26:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.528 17:26:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.528 17:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.528 "name": "raid_bdev1", 00:09:31.528 "uuid": "edcb6f62-bc39-4691-97a1-953742ae6ca7", 00:09:31.528 "strip_size_kb": 64, 00:09:31.528 "state": "online", 00:09:31.528 "raid_level": "concat", 00:09:31.528 "superblock": true, 00:09:31.528 "num_base_bdevs": 3, 00:09:31.528 "num_base_bdevs_discovered": 3, 00:09:31.528 "num_base_bdevs_operational": 3, 00:09:31.528 "base_bdevs_list": [ 00:09:31.528 { 00:09:31.528 "name": "BaseBdev1", 00:09:31.528 "uuid": "2722f23a-7cd1-5703-a1ba-304e0c449491", 00:09:31.528 "is_configured": true, 00:09:31.528 "data_offset": 2048, 00:09:31.528 "data_size": 63488 
00:09:31.528 }, 00:09:31.528 { 00:09:31.528 "name": "BaseBdev2", 00:09:31.528 "uuid": "f213a1d9-58f4-5bfa-86ad-dc1b6c1a2af0", 00:09:31.528 "is_configured": true, 00:09:31.528 "data_offset": 2048, 00:09:31.528 "data_size": 63488 00:09:31.528 }, 00:09:31.528 { 00:09:31.528 "name": "BaseBdev3", 00:09:31.528 "uuid": "54dc0933-5d22-5fe9-9380-abee20a14011", 00:09:31.528 "is_configured": true, 00:09:31.528 "data_offset": 2048, 00:09:31.528 "data_size": 63488 00:09:31.528 } 00:09:31.528 ] 00:09:31.528 }' 00:09:31.528 17:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.528 17:26:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.096 17:26:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:32.096 17:26:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.096 17:26:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.096 [2024-12-07 17:26:05.231445] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:32.096 [2024-12-07 17:26:05.231594] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:32.096 [2024-12-07 17:26:05.234301] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:32.096 [2024-12-07 17:26:05.234404] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:32.096 [2024-12-07 17:26:05.234474] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:32.096 [2024-12-07 17:26:05.234530] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:32.096 { 00:09:32.096 "results": [ 00:09:32.096 { 00:09:32.096 "job": "raid_bdev1", 00:09:32.097 "core_mask": "0x1", 00:09:32.097 "workload": "randrw", 00:09:32.097 "percentage": 50, 
00:09:32.097 "status": "finished", 00:09:32.097 "queue_depth": 1, 00:09:32.097 "io_size": 131072, 00:09:32.097 "runtime": 1.410159, 00:09:32.097 "iops": 13077.248735780859, 00:09:32.097 "mibps": 1634.6560919726073, 00:09:32.097 "io_failed": 1, 00:09:32.097 "io_timeout": 0, 00:09:32.097 "avg_latency_us": 107.58194419516113, 00:09:32.097 "min_latency_us": 26.717903930131005, 00:09:32.097 "max_latency_us": 1366.5257641921398 00:09:32.097 } 00:09:32.097 ], 00:09:32.097 "core_count": 1 00:09:32.097 } 00:09:32.097 17:26:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.097 17:26:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67121 00:09:32.097 17:26:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 67121 ']' 00:09:32.097 17:26:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 67121 00:09:32.097 17:26:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:32.097 17:26:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:32.097 17:26:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67121 00:09:32.097 killing process with pid 67121 00:09:32.097 17:26:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:32.097 17:26:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:32.097 17:26:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67121' 00:09:32.097 17:26:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 67121 00:09:32.097 [2024-12-07 17:26:05.279789] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:32.097 17:26:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 67121 00:09:32.356 [2024-12-07 
17:26:05.536166] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:33.735 17:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.04KJAGl1xr 00:09:33.735 17:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:33.735 17:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:33.735 17:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:09:33.735 17:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:33.735 17:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:33.735 17:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:33.735 17:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:09:33.735 00:09:33.735 real 0m4.701s 00:09:33.735 user 0m5.443s 00:09:33.735 sys 0m0.686s 00:09:33.735 17:26:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:33.735 17:26:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.735 ************************************ 00:09:33.735 END TEST raid_read_error_test 00:09:33.735 ************************************ 00:09:33.735 17:26:06 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:09:33.735 17:26:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:33.735 17:26:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.735 17:26:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:33.735 ************************************ 00:09:33.735 START TEST raid_write_error_test 00:09:33.735 ************************************ 00:09:33.735 17:26:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:09:33.735 17:26:06 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:33.735 17:26:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:33.735 17:26:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:33.735 17:26:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:33.735 17:26:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:33.735 17:26:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:33.735 17:26:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:33.735 17:26:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:33.735 17:26:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:33.735 17:26:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:33.736 17:26:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:33.736 17:26:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:33.736 17:26:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:33.736 17:26:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:33.736 17:26:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:33.736 17:26:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:33.736 17:26:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:33.736 17:26:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:33.736 17:26:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:33.736 17:26:06 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:33.736 17:26:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:33.736 17:26:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:33.736 17:26:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:33.736 17:26:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:33.736 17:26:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:33.736 17:26:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.1XeX52hi6G 00:09:33.736 17:26:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67267 00:09:33.736 17:26:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:33.736 17:26:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67267 00:09:33.736 17:26:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67267 ']' 00:09:33.736 17:26:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.736 17:26:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:33.736 17:26:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:33.736 17:26:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:33.736 17:26:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.736 [2024-12-07 17:26:07.023496] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:09:33.736 [2024-12-07 17:26:07.023632] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67267 ] 00:09:33.995 [2024-12-07 17:26:07.204281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.995 [2024-12-07 17:26:07.343437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.254 [2024-12-07 17:26:07.585138] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:34.254 [2024-12-07 17:26:07.585185] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:34.515 17:26:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:34.515 17:26:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:34.515 17:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:34.515 17:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:34.515 17:26:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.515 17:26:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.515 BaseBdev1_malloc 00:09:34.515 17:26:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.515 17:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:34.515 17:26:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.515 17:26:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.515 true 00:09:34.515 17:26:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.515 17:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:34.515 17:26:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.515 17:26:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.515 [2024-12-07 17:26:07.892522] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:34.515 [2024-12-07 17:26:07.892606] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:34.515 [2024-12-07 17:26:07.892633] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:34.515 [2024-12-07 17:26:07.892647] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:34.776 [2024-12-07 17:26:07.895076] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:34.776 [2024-12-07 17:26:07.895210] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:34.776 BaseBdev1 00:09:34.776 17:26:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.776 17:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:34.776 17:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:34.776 17:26:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.776 17:26:07 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:34.776 BaseBdev2_malloc 00:09:34.776 17:26:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.776 17:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:34.776 17:26:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.776 17:26:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.776 true 00:09:34.776 17:26:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.776 17:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:34.776 17:26:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.776 17:26:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.776 [2024-12-07 17:26:07.953277] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:34.776 [2024-12-07 17:26:07.953437] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:34.776 [2024-12-07 17:26:07.953463] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:34.776 [2024-12-07 17:26:07.953477] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:34.776 [2024-12-07 17:26:07.955849] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:34.776 [2024-12-07 17:26:07.955898] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:34.776 BaseBdev2 00:09:34.776 17:26:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.776 17:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:34.776 17:26:07 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:34.776 17:26:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.776 17:26:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.776 BaseBdev3_malloc 00:09:34.776 17:26:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.776 17:26:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:34.776 17:26:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.776 17:26:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.776 true 00:09:34.776 17:26:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.776 17:26:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:34.776 17:26:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.776 17:26:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.776 [2024-12-07 17:26:08.024074] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:34.776 [2024-12-07 17:26:08.024145] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:34.776 [2024-12-07 17:26:08.024167] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:34.776 [2024-12-07 17:26:08.024182] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:34.776 [2024-12-07 17:26:08.026681] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:34.776 BaseBdev3 00:09:34.776 [2024-12-07 17:26:08.026819] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev3 00:09:34.776 17:26:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.776 17:26:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:34.776 17:26:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.776 17:26:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.776 [2024-12-07 17:26:08.032167] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:34.776 [2024-12-07 17:26:08.034382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:34.776 [2024-12-07 17:26:08.034468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:34.776 [2024-12-07 17:26:08.034692] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:34.776 [2024-12-07 17:26:08.034708] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:34.776 [2024-12-07 17:26:08.034992] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:34.776 [2024-12-07 17:26:08.035180] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:34.776 [2024-12-07 17:26:08.035208] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:34.776 [2024-12-07 17:26:08.035364] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:34.776 17:26:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.776 17:26:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:34.776 17:26:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:09:34.776 17:26:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:34.776 17:26:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:34.776 17:26:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:34.776 17:26:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.776 17:26:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.776 17:26:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.776 17:26:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.776 17:26:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.776 17:26:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.776 17:26:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:34.776 17:26:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.776 17:26:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.776 17:26:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.776 17:26:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.776 "name": "raid_bdev1", 00:09:34.776 "uuid": "a6c37d5e-df8e-474b-af5f-955d32418bb8", 00:09:34.776 "strip_size_kb": 64, 00:09:34.776 "state": "online", 00:09:34.776 "raid_level": "concat", 00:09:34.776 "superblock": true, 00:09:34.776 "num_base_bdevs": 3, 00:09:34.776 "num_base_bdevs_discovered": 3, 00:09:34.776 "num_base_bdevs_operational": 3, 00:09:34.776 "base_bdevs_list": [ 00:09:34.776 { 00:09:34.776 "name": "BaseBdev1", 
00:09:34.776 "uuid": "6bd1c432-c870-5135-a408-85fc939caae1", 00:09:34.777 "is_configured": true, 00:09:34.777 "data_offset": 2048, 00:09:34.777 "data_size": 63488 00:09:34.777 }, 00:09:34.777 { 00:09:34.777 "name": "BaseBdev2", 00:09:34.777 "uuid": "6efb28ec-99a8-5edb-98ef-32db47ad3d2d", 00:09:34.777 "is_configured": true, 00:09:34.777 "data_offset": 2048, 00:09:34.777 "data_size": 63488 00:09:34.777 }, 00:09:34.777 { 00:09:34.777 "name": "BaseBdev3", 00:09:34.777 "uuid": "7877e42f-2d43-5d88-a89a-417c90226d2b", 00:09:34.777 "is_configured": true, 00:09:34.777 "data_offset": 2048, 00:09:34.777 "data_size": 63488 00:09:34.777 } 00:09:34.777 ] 00:09:34.777 }' 00:09:34.777 17:26:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.777 17:26:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.347 17:26:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:35.347 17:26:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:35.347 [2024-12-07 17:26:08.536673] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:36.287 17:26:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:36.287 17:26:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.287 17:26:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.287 17:26:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.287 17:26:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:36.287 17:26:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:36.287 17:26:09 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:36.287 17:26:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:36.287 17:26:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:36.287 17:26:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:36.287 17:26:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:36.287 17:26:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:36.287 17:26:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:36.287 17:26:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.287 17:26:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.287 17:26:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.287 17:26:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.287 17:26:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.287 17:26:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:36.287 17:26:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.287 17:26:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.287 17:26:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.287 17:26:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.287 "name": "raid_bdev1", 00:09:36.287 "uuid": "a6c37d5e-df8e-474b-af5f-955d32418bb8", 00:09:36.287 "strip_size_kb": 64, 00:09:36.287 "state": "online", 00:09:36.287 
"raid_level": "concat", 00:09:36.287 "superblock": true, 00:09:36.287 "num_base_bdevs": 3, 00:09:36.287 "num_base_bdevs_discovered": 3, 00:09:36.287 "num_base_bdevs_operational": 3, 00:09:36.287 "base_bdevs_list": [ 00:09:36.287 { 00:09:36.287 "name": "BaseBdev1", 00:09:36.287 "uuid": "6bd1c432-c870-5135-a408-85fc939caae1", 00:09:36.287 "is_configured": true, 00:09:36.287 "data_offset": 2048, 00:09:36.287 "data_size": 63488 00:09:36.287 }, 00:09:36.287 { 00:09:36.287 "name": "BaseBdev2", 00:09:36.287 "uuid": "6efb28ec-99a8-5edb-98ef-32db47ad3d2d", 00:09:36.287 "is_configured": true, 00:09:36.287 "data_offset": 2048, 00:09:36.287 "data_size": 63488 00:09:36.287 }, 00:09:36.287 { 00:09:36.287 "name": "BaseBdev3", 00:09:36.287 "uuid": "7877e42f-2d43-5d88-a89a-417c90226d2b", 00:09:36.287 "is_configured": true, 00:09:36.287 "data_offset": 2048, 00:09:36.287 "data_size": 63488 00:09:36.287 } 00:09:36.287 ] 00:09:36.287 }' 00:09:36.287 17:26:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.287 17:26:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.546 17:26:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:36.546 17:26:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.546 17:26:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.546 [2024-12-07 17:26:09.889550] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:36.546 [2024-12-07 17:26:09.889710] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:36.546 [2024-12-07 17:26:09.892444] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:36.546 [2024-12-07 17:26:09.892496] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:36.546 [2024-12-07 17:26:09.892540] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:36.546 [2024-12-07 17:26:09.892553] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:36.546 { 00:09:36.546 "results": [ 00:09:36.546 { 00:09:36.546 "job": "raid_bdev1", 00:09:36.546 "core_mask": "0x1", 00:09:36.546 "workload": "randrw", 00:09:36.547 "percentage": 50, 00:09:36.547 "status": "finished", 00:09:36.547 "queue_depth": 1, 00:09:36.547 "io_size": 131072, 00:09:36.547 "runtime": 1.353647, 00:09:36.547 "iops": 13244.959727314434, 00:09:36.547 "mibps": 1655.6199659143042, 00:09:36.547 "io_failed": 1, 00:09:36.547 "io_timeout": 0, 00:09:36.547 "avg_latency_us": 106.06539375592125, 00:09:36.547 "min_latency_us": 26.494323144104804, 00:09:36.547 "max_latency_us": 1423.7624454148472 00:09:36.547 } 00:09:36.547 ], 00:09:36.547 "core_count": 1 00:09:36.547 } 00:09:36.547 17:26:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.547 17:26:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67267 00:09:36.547 17:26:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67267 ']' 00:09:36.547 17:26:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67267 00:09:36.547 17:26:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:36.547 17:26:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:36.547 17:26:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67267 00:09:36.805 17:26:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:36.805 17:26:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:36.805 17:26:09 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 67267' 00:09:36.805 killing process with pid 67267 00:09:36.805 17:26:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67267 00:09:36.805 [2024-12-07 17:26:09.938537] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:36.805 17:26:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67267 00:09:36.805 [2024-12-07 17:26:10.182409] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:38.190 17:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.1XeX52hi6G 00:09:38.190 17:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:38.190 17:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:38.190 17:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:09:38.190 17:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:38.190 17:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:38.190 17:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:38.190 17:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:09:38.190 00:09:38.190 real 0m4.571s 00:09:38.190 user 0m5.222s 00:09:38.190 sys 0m0.659s 00:09:38.190 17:26:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:38.190 17:26:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.190 ************************************ 00:09:38.190 END TEST raid_write_error_test 00:09:38.190 ************************************ 00:09:38.190 17:26:11 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:38.190 17:26:11 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:09:38.190 17:26:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:38.190 17:26:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:38.190 17:26:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:38.190 ************************************ 00:09:38.190 START TEST raid_state_function_test 00:09:38.190 ************************************ 00:09:38.190 17:26:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:09:38.190 17:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:38.190 17:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:38.191 17:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:38.191 17:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:38.191 17:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:38.191 17:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:38.191 17:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:38.191 17:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:38.191 17:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:38.191 17:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:38.191 17:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:38.191 17:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:38.191 17:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:38.191 17:26:11 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:38.191 17:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:38.191 17:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:38.191 17:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:38.191 17:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:38.191 17:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:38.191 17:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:38.191 17:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:38.191 17:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:38.191 17:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:38.191 17:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:38.191 17:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:38.191 17:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67410 00:09:38.191 17:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:38.191 17:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67410' 00:09:38.191 Process raid pid: 67410 00:09:38.191 17:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67410 00:09:38.191 17:26:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67410 ']' 00:09:38.191 17:26:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.191 17:26:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:38.191 17:26:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.191 17:26:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:38.191 17:26:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.450 [2024-12-07 17:26:11.643202] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:09:38.450 [2024-12-07 17:26:11.643375] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:38.450 [2024-12-07 17:26:11.797820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.709 [2024-12-07 17:26:11.938872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.968 [2024-12-07 17:26:12.186262] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:38.968 [2024-12-07 17:26:12.186445] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:39.227 17:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:39.227 17:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:39.227 17:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:39.227 17:26:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.227 17:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.227 [2024-12-07 17:26:12.469740] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:39.227 [2024-12-07 17:26:12.469953] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:39.227 [2024-12-07 17:26:12.469997] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:39.227 [2024-12-07 17:26:12.470027] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:39.227 [2024-12-07 17:26:12.470059] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:39.227 [2024-12-07 17:26:12.470091] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:39.227 17:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.227 17:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:39.228 17:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.228 17:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.228 17:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:39.228 17:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:39.228 17:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:39.228 17:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.228 17:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.228 
17:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.228 17:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.228 17:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.228 17:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.228 17:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.228 17:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.228 17:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.228 17:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.228 "name": "Existed_Raid", 00:09:39.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.228 "strip_size_kb": 0, 00:09:39.228 "state": "configuring", 00:09:39.228 "raid_level": "raid1", 00:09:39.228 "superblock": false, 00:09:39.228 "num_base_bdevs": 3, 00:09:39.228 "num_base_bdevs_discovered": 0, 00:09:39.228 "num_base_bdevs_operational": 3, 00:09:39.228 "base_bdevs_list": [ 00:09:39.228 { 00:09:39.228 "name": "BaseBdev1", 00:09:39.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.228 "is_configured": false, 00:09:39.228 "data_offset": 0, 00:09:39.228 "data_size": 0 00:09:39.228 }, 00:09:39.228 { 00:09:39.228 "name": "BaseBdev2", 00:09:39.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.228 "is_configured": false, 00:09:39.228 "data_offset": 0, 00:09:39.228 "data_size": 0 00:09:39.228 }, 00:09:39.228 { 00:09:39.228 "name": "BaseBdev3", 00:09:39.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.228 "is_configured": false, 00:09:39.228 "data_offset": 0, 00:09:39.228 "data_size": 0 00:09:39.228 } 00:09:39.228 ] 00:09:39.228 }' 00:09:39.228 17:26:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.228 17:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.488 17:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:39.488 17:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.488 17:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.488 [2024-12-07 17:26:12.821128] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:39.488 [2024-12-07 17:26:12.821271] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:39.488 17:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.488 17:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:39.488 17:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.488 17:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.488 [2024-12-07 17:26:12.829106] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:39.489 [2024-12-07 17:26:12.829164] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:39.489 [2024-12-07 17:26:12.829175] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:39.489 [2024-12-07 17:26:12.829189] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:39.489 [2024-12-07 17:26:12.829197] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:39.489 [2024-12-07 17:26:12.829209] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:39.489 17:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.489 17:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:39.489 17:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.489 17:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.748 [2024-12-07 17:26:12.880461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:39.748 BaseBdev1 00:09:39.748 17:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.748 17:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:39.748 17:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:39.749 17:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:39.749 17:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:39.749 17:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:39.749 17:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:39.749 17:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:39.749 17:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.749 17:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.749 17:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.749 17:26:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:39.749 17:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.749 17:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.749 [ 00:09:39.749 { 00:09:39.749 "name": "BaseBdev1", 00:09:39.749 "aliases": [ 00:09:39.749 "6a16463c-3d50-496d-92ce-9de65f4909ab" 00:09:39.749 ], 00:09:39.749 "product_name": "Malloc disk", 00:09:39.749 "block_size": 512, 00:09:39.749 "num_blocks": 65536, 00:09:39.749 "uuid": "6a16463c-3d50-496d-92ce-9de65f4909ab", 00:09:39.749 "assigned_rate_limits": { 00:09:39.749 "rw_ios_per_sec": 0, 00:09:39.749 "rw_mbytes_per_sec": 0, 00:09:39.749 "r_mbytes_per_sec": 0, 00:09:39.749 "w_mbytes_per_sec": 0 00:09:39.749 }, 00:09:39.749 "claimed": true, 00:09:39.749 "claim_type": "exclusive_write", 00:09:39.749 "zoned": false, 00:09:39.749 "supported_io_types": { 00:09:39.749 "read": true, 00:09:39.749 "write": true, 00:09:39.749 "unmap": true, 00:09:39.749 "flush": true, 00:09:39.749 "reset": true, 00:09:39.749 "nvme_admin": false, 00:09:39.749 "nvme_io": false, 00:09:39.749 "nvme_io_md": false, 00:09:39.749 "write_zeroes": true, 00:09:39.749 "zcopy": true, 00:09:39.749 "get_zone_info": false, 00:09:39.749 "zone_management": false, 00:09:39.749 "zone_append": false, 00:09:39.749 "compare": false, 00:09:39.749 "compare_and_write": false, 00:09:39.749 "abort": true, 00:09:39.749 "seek_hole": false, 00:09:39.749 "seek_data": false, 00:09:39.749 "copy": true, 00:09:39.749 "nvme_iov_md": false 00:09:39.749 }, 00:09:39.749 "memory_domains": [ 00:09:39.749 { 00:09:39.749 "dma_device_id": "system", 00:09:39.749 "dma_device_type": 1 00:09:39.749 }, 00:09:39.749 { 00:09:39.749 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.749 "dma_device_type": 2 00:09:39.749 } 00:09:39.749 ], 00:09:39.749 "driver_specific": {} 00:09:39.749 } 00:09:39.749 ] 00:09:39.749 17:26:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.749 17:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:39.749 17:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:39.749 17:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.749 17:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.749 17:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:39.749 17:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:39.749 17:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:39.749 17:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.749 17:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.749 17:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.749 17:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.749 17:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.749 17:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.749 17:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.749 17:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.749 17:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.749 17:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:39.749 "name": "Existed_Raid", 00:09:39.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.749 "strip_size_kb": 0, 00:09:39.749 "state": "configuring", 00:09:39.749 "raid_level": "raid1", 00:09:39.749 "superblock": false, 00:09:39.749 "num_base_bdevs": 3, 00:09:39.749 "num_base_bdevs_discovered": 1, 00:09:39.749 "num_base_bdevs_operational": 3, 00:09:39.749 "base_bdevs_list": [ 00:09:39.749 { 00:09:39.749 "name": "BaseBdev1", 00:09:39.749 "uuid": "6a16463c-3d50-496d-92ce-9de65f4909ab", 00:09:39.749 "is_configured": true, 00:09:39.749 "data_offset": 0, 00:09:39.749 "data_size": 65536 00:09:39.749 }, 00:09:39.749 { 00:09:39.749 "name": "BaseBdev2", 00:09:39.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.749 "is_configured": false, 00:09:39.749 "data_offset": 0, 00:09:39.749 "data_size": 0 00:09:39.749 }, 00:09:39.749 { 00:09:39.749 "name": "BaseBdev3", 00:09:39.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.749 "is_configured": false, 00:09:39.749 "data_offset": 0, 00:09:39.749 "data_size": 0 00:09:39.749 } 00:09:39.749 ] 00:09:39.749 }' 00:09:39.749 17:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.749 17:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.085 17:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:40.086 17:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.086 17:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.086 [2024-12-07 17:26:13.367726] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:40.086 [2024-12-07 17:26:13.367814] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:40.086 17:26:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.086 17:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:40.086 17:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.086 17:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.086 [2024-12-07 17:26:13.379706] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:40.086 [2024-12-07 17:26:13.381970] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:40.086 [2024-12-07 17:26:13.382068] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:40.086 [2024-12-07 17:26:13.382103] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:40.086 [2024-12-07 17:26:13.382131] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:40.086 17:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.086 17:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:40.086 17:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:40.086 17:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:40.086 17:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.086 17:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:40.086 17:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:40.086 17:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:09:40.086 17:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:40.086 17:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.086 17:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.086 17:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.086 17:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.086 17:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.086 17:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.086 17:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.086 17:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.086 17:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.086 17:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.086 "name": "Existed_Raid", 00:09:40.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.086 "strip_size_kb": 0, 00:09:40.086 "state": "configuring", 00:09:40.086 "raid_level": "raid1", 00:09:40.086 "superblock": false, 00:09:40.086 "num_base_bdevs": 3, 00:09:40.086 "num_base_bdevs_discovered": 1, 00:09:40.086 "num_base_bdevs_operational": 3, 00:09:40.086 "base_bdevs_list": [ 00:09:40.086 { 00:09:40.086 "name": "BaseBdev1", 00:09:40.086 "uuid": "6a16463c-3d50-496d-92ce-9de65f4909ab", 00:09:40.086 "is_configured": true, 00:09:40.086 "data_offset": 0, 00:09:40.086 "data_size": 65536 00:09:40.086 }, 00:09:40.086 { 00:09:40.086 "name": "BaseBdev2", 00:09:40.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.086 
"is_configured": false, 00:09:40.086 "data_offset": 0, 00:09:40.086 "data_size": 0 00:09:40.086 }, 00:09:40.086 { 00:09:40.086 "name": "BaseBdev3", 00:09:40.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.086 "is_configured": false, 00:09:40.086 "data_offset": 0, 00:09:40.086 "data_size": 0 00:09:40.086 } 00:09:40.086 ] 00:09:40.086 }' 00:09:40.086 17:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.086 17:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.656 17:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:40.656 17:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.656 17:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.656 [2024-12-07 17:26:13.889829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:40.656 BaseBdev2 00:09:40.656 17:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.656 17:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:40.656 17:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:40.656 17:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:40.656 17:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:40.656 17:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:40.656 17:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:40.656 17:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:40.656 17:26:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.656 17:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.656 17:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.656 17:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:40.656 17:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.656 17:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.656 [ 00:09:40.656 { 00:09:40.656 "name": "BaseBdev2", 00:09:40.656 "aliases": [ 00:09:40.656 "ef46bc90-f1bd-4df2-be3e-0e4e19fb5295" 00:09:40.656 ], 00:09:40.656 "product_name": "Malloc disk", 00:09:40.656 "block_size": 512, 00:09:40.656 "num_blocks": 65536, 00:09:40.656 "uuid": "ef46bc90-f1bd-4df2-be3e-0e4e19fb5295", 00:09:40.656 "assigned_rate_limits": { 00:09:40.656 "rw_ios_per_sec": 0, 00:09:40.656 "rw_mbytes_per_sec": 0, 00:09:40.656 "r_mbytes_per_sec": 0, 00:09:40.656 "w_mbytes_per_sec": 0 00:09:40.656 }, 00:09:40.656 "claimed": true, 00:09:40.656 "claim_type": "exclusive_write", 00:09:40.656 "zoned": false, 00:09:40.656 "supported_io_types": { 00:09:40.656 "read": true, 00:09:40.656 "write": true, 00:09:40.656 "unmap": true, 00:09:40.656 "flush": true, 00:09:40.656 "reset": true, 00:09:40.656 "nvme_admin": false, 00:09:40.656 "nvme_io": false, 00:09:40.656 "nvme_io_md": false, 00:09:40.656 "write_zeroes": true, 00:09:40.656 "zcopy": true, 00:09:40.656 "get_zone_info": false, 00:09:40.656 "zone_management": false, 00:09:40.656 "zone_append": false, 00:09:40.656 "compare": false, 00:09:40.656 "compare_and_write": false, 00:09:40.656 "abort": true, 00:09:40.656 "seek_hole": false, 00:09:40.656 "seek_data": false, 00:09:40.656 "copy": true, 00:09:40.656 "nvme_iov_md": false 00:09:40.656 }, 00:09:40.656 
"memory_domains": [ 00:09:40.656 { 00:09:40.656 "dma_device_id": "system", 00:09:40.656 "dma_device_type": 1 00:09:40.656 }, 00:09:40.656 { 00:09:40.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.656 "dma_device_type": 2 00:09:40.656 } 00:09:40.656 ], 00:09:40.656 "driver_specific": {} 00:09:40.656 } 00:09:40.656 ] 00:09:40.656 17:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.656 17:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:40.656 17:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:40.656 17:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:40.656 17:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:40.656 17:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.656 17:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:40.656 17:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:40.656 17:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:40.656 17:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:40.656 17:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.656 17:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.656 17:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.656 17:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.656 17:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:40.656 17:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.656 17:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.656 17:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.656 17:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.656 17:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.656 "name": "Existed_Raid", 00:09:40.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.656 "strip_size_kb": 0, 00:09:40.656 "state": "configuring", 00:09:40.656 "raid_level": "raid1", 00:09:40.656 "superblock": false, 00:09:40.656 "num_base_bdevs": 3, 00:09:40.656 "num_base_bdevs_discovered": 2, 00:09:40.656 "num_base_bdevs_operational": 3, 00:09:40.656 "base_bdevs_list": [ 00:09:40.656 { 00:09:40.656 "name": "BaseBdev1", 00:09:40.656 "uuid": "6a16463c-3d50-496d-92ce-9de65f4909ab", 00:09:40.656 "is_configured": true, 00:09:40.656 "data_offset": 0, 00:09:40.656 "data_size": 65536 00:09:40.656 }, 00:09:40.656 { 00:09:40.656 "name": "BaseBdev2", 00:09:40.656 "uuid": "ef46bc90-f1bd-4df2-be3e-0e4e19fb5295", 00:09:40.656 "is_configured": true, 00:09:40.656 "data_offset": 0, 00:09:40.656 "data_size": 65536 00:09:40.656 }, 00:09:40.656 { 00:09:40.656 "name": "BaseBdev3", 00:09:40.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.656 "is_configured": false, 00:09:40.656 "data_offset": 0, 00:09:40.656 "data_size": 0 00:09:40.656 } 00:09:40.656 ] 00:09:40.656 }' 00:09:40.656 17:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.656 17:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.226 17:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:09:41.226 17:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.226 17:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.226 [2024-12-07 17:26:14.459209] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:41.226 [2024-12-07 17:26:14.459271] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:41.226 [2024-12-07 17:26:14.459287] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:41.226 [2024-12-07 17:26:14.459607] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:41.226 [2024-12-07 17:26:14.459807] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:41.226 [2024-12-07 17:26:14.459817] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:41.226 [2024-12-07 17:26:14.460133] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:41.226 BaseBdev3 00:09:41.226 17:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.226 17:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:41.226 17:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:41.226 17:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:41.226 17:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:41.226 17:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:41.226 17:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:41.226 17:26:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:41.226 17:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.226 17:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.226 17:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.226 17:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:41.226 17:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.226 17:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.226 [ 00:09:41.226 { 00:09:41.226 "name": "BaseBdev3", 00:09:41.226 "aliases": [ 00:09:41.226 "1bf1a9bb-8175-42c0-9d1a-8df3fc683eaf" 00:09:41.226 ], 00:09:41.226 "product_name": "Malloc disk", 00:09:41.226 "block_size": 512, 00:09:41.226 "num_blocks": 65536, 00:09:41.226 "uuid": "1bf1a9bb-8175-42c0-9d1a-8df3fc683eaf", 00:09:41.226 "assigned_rate_limits": { 00:09:41.226 "rw_ios_per_sec": 0, 00:09:41.226 "rw_mbytes_per_sec": 0, 00:09:41.226 "r_mbytes_per_sec": 0, 00:09:41.226 "w_mbytes_per_sec": 0 00:09:41.226 }, 00:09:41.226 "claimed": true, 00:09:41.226 "claim_type": "exclusive_write", 00:09:41.226 "zoned": false, 00:09:41.226 "supported_io_types": { 00:09:41.226 "read": true, 00:09:41.226 "write": true, 00:09:41.226 "unmap": true, 00:09:41.226 "flush": true, 00:09:41.226 "reset": true, 00:09:41.226 "nvme_admin": false, 00:09:41.226 "nvme_io": false, 00:09:41.226 "nvme_io_md": false, 00:09:41.226 "write_zeroes": true, 00:09:41.226 "zcopy": true, 00:09:41.226 "get_zone_info": false, 00:09:41.226 "zone_management": false, 00:09:41.226 "zone_append": false, 00:09:41.226 "compare": false, 00:09:41.226 "compare_and_write": false, 00:09:41.226 "abort": true, 00:09:41.226 "seek_hole": false, 00:09:41.226 "seek_data": false, 00:09:41.226 
"copy": true, 00:09:41.226 "nvme_iov_md": false 00:09:41.226 }, 00:09:41.226 "memory_domains": [ 00:09:41.226 { 00:09:41.226 "dma_device_id": "system", 00:09:41.226 "dma_device_type": 1 00:09:41.226 }, 00:09:41.226 { 00:09:41.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.226 "dma_device_type": 2 00:09:41.226 } 00:09:41.226 ], 00:09:41.226 "driver_specific": {} 00:09:41.226 } 00:09:41.226 ] 00:09:41.226 17:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.226 17:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:41.226 17:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:41.226 17:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:41.226 17:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:41.226 17:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.226 17:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:41.226 17:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:41.226 17:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:41.226 17:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:41.226 17:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.226 17:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.226 17:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.226 17:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.226 17:26:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.226 17:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.226 17:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.226 17:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.226 17:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.226 17:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.226 "name": "Existed_Raid", 00:09:41.226 "uuid": "80e32052-06d7-48a8-924d-85b44ac99819", 00:09:41.226 "strip_size_kb": 0, 00:09:41.226 "state": "online", 00:09:41.226 "raid_level": "raid1", 00:09:41.226 "superblock": false, 00:09:41.226 "num_base_bdevs": 3, 00:09:41.226 "num_base_bdevs_discovered": 3, 00:09:41.226 "num_base_bdevs_operational": 3, 00:09:41.226 "base_bdevs_list": [ 00:09:41.226 { 00:09:41.226 "name": "BaseBdev1", 00:09:41.226 "uuid": "6a16463c-3d50-496d-92ce-9de65f4909ab", 00:09:41.226 "is_configured": true, 00:09:41.226 "data_offset": 0, 00:09:41.226 "data_size": 65536 00:09:41.226 }, 00:09:41.226 { 00:09:41.226 "name": "BaseBdev2", 00:09:41.226 "uuid": "ef46bc90-f1bd-4df2-be3e-0e4e19fb5295", 00:09:41.226 "is_configured": true, 00:09:41.226 "data_offset": 0, 00:09:41.226 "data_size": 65536 00:09:41.226 }, 00:09:41.226 { 00:09:41.226 "name": "BaseBdev3", 00:09:41.226 "uuid": "1bf1a9bb-8175-42c0-9d1a-8df3fc683eaf", 00:09:41.226 "is_configured": true, 00:09:41.226 "data_offset": 0, 00:09:41.226 "data_size": 65536 00:09:41.226 } 00:09:41.226 ] 00:09:41.226 }' 00:09:41.226 17:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.226 17:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.795 17:26:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:41.795 17:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:41.795 17:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:41.795 17:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:41.795 17:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:41.795 17:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:41.795 17:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:41.795 17:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.795 17:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.795 17:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:41.795 [2024-12-07 17:26:14.966717] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:41.795 17:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.795 17:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:41.795 "name": "Existed_Raid", 00:09:41.795 "aliases": [ 00:09:41.795 "80e32052-06d7-48a8-924d-85b44ac99819" 00:09:41.795 ], 00:09:41.795 "product_name": "Raid Volume", 00:09:41.795 "block_size": 512, 00:09:41.795 "num_blocks": 65536, 00:09:41.795 "uuid": "80e32052-06d7-48a8-924d-85b44ac99819", 00:09:41.795 "assigned_rate_limits": { 00:09:41.795 "rw_ios_per_sec": 0, 00:09:41.795 "rw_mbytes_per_sec": 0, 00:09:41.795 "r_mbytes_per_sec": 0, 00:09:41.795 "w_mbytes_per_sec": 0 00:09:41.795 }, 00:09:41.795 "claimed": false, 00:09:41.795 "zoned": false, 
00:09:41.795 "supported_io_types": { 00:09:41.795 "read": true, 00:09:41.795 "write": true, 00:09:41.795 "unmap": false, 00:09:41.795 "flush": false, 00:09:41.795 "reset": true, 00:09:41.795 "nvme_admin": false, 00:09:41.796 "nvme_io": false, 00:09:41.796 "nvme_io_md": false, 00:09:41.796 "write_zeroes": true, 00:09:41.796 "zcopy": false, 00:09:41.796 "get_zone_info": false, 00:09:41.796 "zone_management": false, 00:09:41.796 "zone_append": false, 00:09:41.796 "compare": false, 00:09:41.796 "compare_and_write": false, 00:09:41.796 "abort": false, 00:09:41.796 "seek_hole": false, 00:09:41.796 "seek_data": false, 00:09:41.796 "copy": false, 00:09:41.796 "nvme_iov_md": false 00:09:41.796 }, 00:09:41.796 "memory_domains": [ 00:09:41.796 { 00:09:41.796 "dma_device_id": "system", 00:09:41.796 "dma_device_type": 1 00:09:41.796 }, 00:09:41.796 { 00:09:41.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.796 "dma_device_type": 2 00:09:41.796 }, 00:09:41.796 { 00:09:41.796 "dma_device_id": "system", 00:09:41.796 "dma_device_type": 1 00:09:41.796 }, 00:09:41.796 { 00:09:41.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.796 "dma_device_type": 2 00:09:41.796 }, 00:09:41.796 { 00:09:41.796 "dma_device_id": "system", 00:09:41.796 "dma_device_type": 1 00:09:41.796 }, 00:09:41.796 { 00:09:41.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.796 "dma_device_type": 2 00:09:41.796 } 00:09:41.796 ], 00:09:41.796 "driver_specific": { 00:09:41.796 "raid": { 00:09:41.796 "uuid": "80e32052-06d7-48a8-924d-85b44ac99819", 00:09:41.796 "strip_size_kb": 0, 00:09:41.796 "state": "online", 00:09:41.796 "raid_level": "raid1", 00:09:41.796 "superblock": false, 00:09:41.796 "num_base_bdevs": 3, 00:09:41.796 "num_base_bdevs_discovered": 3, 00:09:41.796 "num_base_bdevs_operational": 3, 00:09:41.796 "base_bdevs_list": [ 00:09:41.796 { 00:09:41.796 "name": "BaseBdev1", 00:09:41.796 "uuid": "6a16463c-3d50-496d-92ce-9de65f4909ab", 00:09:41.796 "is_configured": true, 00:09:41.796 
"data_offset": 0, 00:09:41.796 "data_size": 65536 00:09:41.796 }, 00:09:41.796 { 00:09:41.796 "name": "BaseBdev2", 00:09:41.796 "uuid": "ef46bc90-f1bd-4df2-be3e-0e4e19fb5295", 00:09:41.796 "is_configured": true, 00:09:41.796 "data_offset": 0, 00:09:41.796 "data_size": 65536 00:09:41.796 }, 00:09:41.796 { 00:09:41.796 "name": "BaseBdev3", 00:09:41.796 "uuid": "1bf1a9bb-8175-42c0-9d1a-8df3fc683eaf", 00:09:41.796 "is_configured": true, 00:09:41.796 "data_offset": 0, 00:09:41.796 "data_size": 65536 00:09:41.796 } 00:09:41.796 ] 00:09:41.796 } 00:09:41.796 } 00:09:41.796 }' 00:09:41.796 17:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:41.796 17:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:41.796 BaseBdev2 00:09:41.796 BaseBdev3' 00:09:41.796 17:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.796 17:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:41.796 17:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:41.796 17:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:41.796 17:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.796 17:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.796 17:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.796 17:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.796 17:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:09:41.796 17:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.796 17:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:41.796 17:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:41.796 17:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.796 17:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.796 17:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.055 17:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.055 17:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:42.055 17:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:42.055 17:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:42.055 17:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.055 17:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:42.055 17:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.055 17:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.055 17:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.055 17:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:42.056 17:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:09:42.056 17:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:42.056 17:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.056 17:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.056 [2024-12-07 17:26:15.262048] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:42.056 17:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.056 17:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:42.056 17:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:42.056 17:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:42.056 17:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:42.056 17:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:42.056 17:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:42.056 17:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:42.056 17:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:42.056 17:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:42.056 17:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:42.056 17:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:42.056 17:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.056 17:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:09:42.056 17:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.056 17:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.056 17:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.056 17:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.056 17:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.056 17:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.056 17:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.056 17:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.056 "name": "Existed_Raid", 00:09:42.056 "uuid": "80e32052-06d7-48a8-924d-85b44ac99819", 00:09:42.056 "strip_size_kb": 0, 00:09:42.056 "state": "online", 00:09:42.056 "raid_level": "raid1", 00:09:42.056 "superblock": false, 00:09:42.056 "num_base_bdevs": 3, 00:09:42.056 "num_base_bdevs_discovered": 2, 00:09:42.056 "num_base_bdevs_operational": 2, 00:09:42.056 "base_bdevs_list": [ 00:09:42.056 { 00:09:42.056 "name": null, 00:09:42.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.056 "is_configured": false, 00:09:42.056 "data_offset": 0, 00:09:42.056 "data_size": 65536 00:09:42.056 }, 00:09:42.056 { 00:09:42.056 "name": "BaseBdev2", 00:09:42.056 "uuid": "ef46bc90-f1bd-4df2-be3e-0e4e19fb5295", 00:09:42.056 "is_configured": true, 00:09:42.056 "data_offset": 0, 00:09:42.056 "data_size": 65536 00:09:42.056 }, 00:09:42.056 { 00:09:42.056 "name": "BaseBdev3", 00:09:42.056 "uuid": "1bf1a9bb-8175-42c0-9d1a-8df3fc683eaf", 00:09:42.056 "is_configured": true, 00:09:42.056 "data_offset": 0, 00:09:42.056 "data_size": 65536 00:09:42.056 } 00:09:42.056 ] 
00:09:42.056 }' 00:09:42.056 17:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.056 17:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.624 17:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:42.624 17:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:42.624 17:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:42.624 17:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.624 17:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.624 17:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.624 17:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.624 17:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:42.624 17:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:42.624 17:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:42.624 17:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.624 17:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.624 [2024-12-07 17:26:15.897551] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:42.624 17:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.624 17:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:42.624 17:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:42.883 17:26:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.883 17:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:42.883 17:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.883 17:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.883 17:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.883 17:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:42.883 17:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:42.883 17:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:42.884 17:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.884 17:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.884 [2024-12-07 17:26:16.052333] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:42.884 [2024-12-07 17:26:16.052469] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:42.884 [2024-12-07 17:26:16.155686] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:42.884 [2024-12-07 17:26:16.155748] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:42.884 [2024-12-07 17:26:16.155764] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:42.884 17:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.884 17:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:42.884 17:26:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:42.884 17:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:42.884 17:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.884 17:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.884 17:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.884 17:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.884 17:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:42.884 17:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:42.884 17:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:42.884 17:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:42.884 17:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:42.884 17:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:42.884 17:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.884 17:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.884 BaseBdev2 00:09:42.884 17:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.884 17:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:42.884 17:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:42.884 17:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:42.884 
17:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:42.884 17:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:42.884 17:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:42.884 17:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:42.884 17:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.884 17:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.143 17:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.143 17:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:43.143 17:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.143 17:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.143 [ 00:09:43.143 { 00:09:43.143 "name": "BaseBdev2", 00:09:43.143 "aliases": [ 00:09:43.143 "0e803ad2-0937-43d6-b979-f41b5322092a" 00:09:43.143 ], 00:09:43.143 "product_name": "Malloc disk", 00:09:43.143 "block_size": 512, 00:09:43.143 "num_blocks": 65536, 00:09:43.143 "uuid": "0e803ad2-0937-43d6-b979-f41b5322092a", 00:09:43.143 "assigned_rate_limits": { 00:09:43.143 "rw_ios_per_sec": 0, 00:09:43.143 "rw_mbytes_per_sec": 0, 00:09:43.143 "r_mbytes_per_sec": 0, 00:09:43.143 "w_mbytes_per_sec": 0 00:09:43.143 }, 00:09:43.143 "claimed": false, 00:09:43.143 "zoned": false, 00:09:43.143 "supported_io_types": { 00:09:43.143 "read": true, 00:09:43.143 "write": true, 00:09:43.143 "unmap": true, 00:09:43.143 "flush": true, 00:09:43.143 "reset": true, 00:09:43.143 "nvme_admin": false, 00:09:43.143 "nvme_io": false, 00:09:43.143 "nvme_io_md": false, 00:09:43.143 "write_zeroes": true, 
00:09:43.143 "zcopy": true, 00:09:43.143 "get_zone_info": false, 00:09:43.143 "zone_management": false, 00:09:43.143 "zone_append": false, 00:09:43.143 "compare": false, 00:09:43.143 "compare_and_write": false, 00:09:43.143 "abort": true, 00:09:43.143 "seek_hole": false, 00:09:43.143 "seek_data": false, 00:09:43.143 "copy": true, 00:09:43.143 "nvme_iov_md": false 00:09:43.143 }, 00:09:43.143 "memory_domains": [ 00:09:43.143 { 00:09:43.143 "dma_device_id": "system", 00:09:43.143 "dma_device_type": 1 00:09:43.143 }, 00:09:43.143 { 00:09:43.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.143 "dma_device_type": 2 00:09:43.143 } 00:09:43.143 ], 00:09:43.143 "driver_specific": {} 00:09:43.143 } 00:09:43.143 ] 00:09:43.143 17:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.143 17:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:43.143 17:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:43.143 17:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:43.143 17:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:43.143 17:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.143 17:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.143 BaseBdev3 00:09:43.143 17:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.143 17:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:43.143 17:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:43.143 17:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:43.143 17:26:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:43.143 17:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:43.143 17:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:43.143 17:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:43.143 17:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.143 17:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.143 17:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.143 17:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:43.143 17:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.143 17:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.143 [ 00:09:43.143 { 00:09:43.143 "name": "BaseBdev3", 00:09:43.143 "aliases": [ 00:09:43.143 "48cf4032-c70a-4b5f-a8fa-6f52e1ba3db3" 00:09:43.143 ], 00:09:43.143 "product_name": "Malloc disk", 00:09:43.143 "block_size": 512, 00:09:43.143 "num_blocks": 65536, 00:09:43.143 "uuid": "48cf4032-c70a-4b5f-a8fa-6f52e1ba3db3", 00:09:43.143 "assigned_rate_limits": { 00:09:43.143 "rw_ios_per_sec": 0, 00:09:43.143 "rw_mbytes_per_sec": 0, 00:09:43.143 "r_mbytes_per_sec": 0, 00:09:43.143 "w_mbytes_per_sec": 0 00:09:43.143 }, 00:09:43.143 "claimed": false, 00:09:43.143 "zoned": false, 00:09:43.143 "supported_io_types": { 00:09:43.143 "read": true, 00:09:43.143 "write": true, 00:09:43.143 "unmap": true, 00:09:43.143 "flush": true, 00:09:43.143 "reset": true, 00:09:43.143 "nvme_admin": false, 00:09:43.143 "nvme_io": false, 00:09:43.143 "nvme_io_md": false, 00:09:43.143 "write_zeroes": true, 
00:09:43.143 "zcopy": true, 00:09:43.143 "get_zone_info": false, 00:09:43.143 "zone_management": false, 00:09:43.143 "zone_append": false, 00:09:43.143 "compare": false, 00:09:43.143 "compare_and_write": false, 00:09:43.143 "abort": true, 00:09:43.143 "seek_hole": false, 00:09:43.143 "seek_data": false, 00:09:43.143 "copy": true, 00:09:43.143 "nvme_iov_md": false 00:09:43.143 }, 00:09:43.143 "memory_domains": [ 00:09:43.143 { 00:09:43.143 "dma_device_id": "system", 00:09:43.143 "dma_device_type": 1 00:09:43.143 }, 00:09:43.143 { 00:09:43.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.143 "dma_device_type": 2 00:09:43.143 } 00:09:43.143 ], 00:09:43.143 "driver_specific": {} 00:09:43.143 } 00:09:43.143 ] 00:09:43.143 17:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.143 17:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:43.143 17:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:43.143 17:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:43.143 17:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:43.143 17:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.143 17:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.143 [2024-12-07 17:26:16.381325] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:43.143 [2024-12-07 17:26:16.381489] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:43.144 [2024-12-07 17:26:16.381536] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:43.144 [2024-12-07 17:26:16.383729] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:43.144 17:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.144 17:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:43.144 17:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.144 17:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:43.144 17:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:43.144 17:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:43.144 17:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:43.144 17:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.144 17:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.144 17:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.144 17:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.144 17:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.144 17:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.144 17:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.144 17:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.144 17:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.144 17:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:43.144 "name": "Existed_Raid", 00:09:43.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.144 "strip_size_kb": 0, 00:09:43.144 "state": "configuring", 00:09:43.144 "raid_level": "raid1", 00:09:43.144 "superblock": false, 00:09:43.144 "num_base_bdevs": 3, 00:09:43.144 "num_base_bdevs_discovered": 2, 00:09:43.144 "num_base_bdevs_operational": 3, 00:09:43.144 "base_bdevs_list": [ 00:09:43.144 { 00:09:43.144 "name": "BaseBdev1", 00:09:43.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.144 "is_configured": false, 00:09:43.144 "data_offset": 0, 00:09:43.144 "data_size": 0 00:09:43.144 }, 00:09:43.144 { 00:09:43.144 "name": "BaseBdev2", 00:09:43.144 "uuid": "0e803ad2-0937-43d6-b979-f41b5322092a", 00:09:43.144 "is_configured": true, 00:09:43.144 "data_offset": 0, 00:09:43.144 "data_size": 65536 00:09:43.144 }, 00:09:43.144 { 00:09:43.144 "name": "BaseBdev3", 00:09:43.144 "uuid": "48cf4032-c70a-4b5f-a8fa-6f52e1ba3db3", 00:09:43.144 "is_configured": true, 00:09:43.144 "data_offset": 0, 00:09:43.144 "data_size": 65536 00:09:43.144 } 00:09:43.144 ] 00:09:43.144 }' 00:09:43.144 17:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.144 17:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.712 17:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:43.712 17:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.712 17:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.712 [2024-12-07 17:26:16.836548] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:43.712 17:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.712 17:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:09:43.712 17:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.712 17:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:43.712 17:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:43.712 17:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:43.712 17:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:43.712 17:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.712 17:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.712 17:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.712 17:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.712 17:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.712 17:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.712 17:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.712 17:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.712 17:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.712 17:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.712 "name": "Existed_Raid", 00:09:43.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.712 "strip_size_kb": 0, 00:09:43.712 "state": "configuring", 00:09:43.712 "raid_level": "raid1", 00:09:43.712 "superblock": false, 00:09:43.712 "num_base_bdevs": 3, 
00:09:43.712 "num_base_bdevs_discovered": 1, 00:09:43.712 "num_base_bdevs_operational": 3, 00:09:43.712 "base_bdevs_list": [ 00:09:43.712 { 00:09:43.712 "name": "BaseBdev1", 00:09:43.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.712 "is_configured": false, 00:09:43.712 "data_offset": 0, 00:09:43.712 "data_size": 0 00:09:43.712 }, 00:09:43.712 { 00:09:43.712 "name": null, 00:09:43.712 "uuid": "0e803ad2-0937-43d6-b979-f41b5322092a", 00:09:43.712 "is_configured": false, 00:09:43.712 "data_offset": 0, 00:09:43.712 "data_size": 65536 00:09:43.712 }, 00:09:43.712 { 00:09:43.712 "name": "BaseBdev3", 00:09:43.712 "uuid": "48cf4032-c70a-4b5f-a8fa-6f52e1ba3db3", 00:09:43.712 "is_configured": true, 00:09:43.712 "data_offset": 0, 00:09:43.712 "data_size": 65536 00:09:43.712 } 00:09:43.712 ] 00:09:43.712 }' 00:09:43.712 17:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.712 17:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.971 17:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.971 17:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.971 17:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.971 17:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:43.971 17:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.971 17:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:43.971 17:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:43.971 17:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.971 17:26:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.230 [2024-12-07 17:26:17.362258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:44.230 BaseBdev1 00:09:44.230 17:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.230 17:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:44.230 17:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:44.230 17:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:44.230 17:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:44.230 17:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:44.230 17:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:44.230 17:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:44.230 17:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.230 17:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.230 17:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.230 17:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:44.230 17:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.230 17:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.230 [ 00:09:44.230 { 00:09:44.230 "name": "BaseBdev1", 00:09:44.230 "aliases": [ 00:09:44.230 "a8d3742a-c728-4c9f-bbbf-5434794ab965" 00:09:44.230 ], 00:09:44.230 "product_name": "Malloc disk", 
00:09:44.230 "block_size": 512, 00:09:44.230 "num_blocks": 65536, 00:09:44.230 "uuid": "a8d3742a-c728-4c9f-bbbf-5434794ab965", 00:09:44.230 "assigned_rate_limits": { 00:09:44.230 "rw_ios_per_sec": 0, 00:09:44.230 "rw_mbytes_per_sec": 0, 00:09:44.230 "r_mbytes_per_sec": 0, 00:09:44.230 "w_mbytes_per_sec": 0 00:09:44.230 }, 00:09:44.230 "claimed": true, 00:09:44.230 "claim_type": "exclusive_write", 00:09:44.230 "zoned": false, 00:09:44.230 "supported_io_types": { 00:09:44.230 "read": true, 00:09:44.230 "write": true, 00:09:44.230 "unmap": true, 00:09:44.230 "flush": true, 00:09:44.230 "reset": true, 00:09:44.230 "nvme_admin": false, 00:09:44.230 "nvme_io": false, 00:09:44.230 "nvme_io_md": false, 00:09:44.230 "write_zeroes": true, 00:09:44.230 "zcopy": true, 00:09:44.230 "get_zone_info": false, 00:09:44.230 "zone_management": false, 00:09:44.230 "zone_append": false, 00:09:44.230 "compare": false, 00:09:44.230 "compare_and_write": false, 00:09:44.230 "abort": true, 00:09:44.230 "seek_hole": false, 00:09:44.230 "seek_data": false, 00:09:44.230 "copy": true, 00:09:44.230 "nvme_iov_md": false 00:09:44.230 }, 00:09:44.230 "memory_domains": [ 00:09:44.230 { 00:09:44.230 "dma_device_id": "system", 00:09:44.230 "dma_device_type": 1 00:09:44.230 }, 00:09:44.230 { 00:09:44.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.230 "dma_device_type": 2 00:09:44.230 } 00:09:44.230 ], 00:09:44.230 "driver_specific": {} 00:09:44.230 } 00:09:44.230 ] 00:09:44.230 17:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.230 17:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:44.230 17:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:44.230 17:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.230 17:26:17 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.230 17:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:44.230 17:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:44.230 17:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:44.230 17:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.230 17:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.230 17:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.230 17:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.230 17:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.230 17:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.230 17:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.230 17:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.230 17:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.230 17:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.230 "name": "Existed_Raid", 00:09:44.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.230 "strip_size_kb": 0, 00:09:44.230 "state": "configuring", 00:09:44.230 "raid_level": "raid1", 00:09:44.230 "superblock": false, 00:09:44.230 "num_base_bdevs": 3, 00:09:44.230 "num_base_bdevs_discovered": 2, 00:09:44.230 "num_base_bdevs_operational": 3, 00:09:44.230 "base_bdevs_list": [ 00:09:44.230 { 00:09:44.230 "name": "BaseBdev1", 00:09:44.230 "uuid": 
"a8d3742a-c728-4c9f-bbbf-5434794ab965", 00:09:44.230 "is_configured": true, 00:09:44.230 "data_offset": 0, 00:09:44.230 "data_size": 65536 00:09:44.230 }, 00:09:44.230 { 00:09:44.230 "name": null, 00:09:44.230 "uuid": "0e803ad2-0937-43d6-b979-f41b5322092a", 00:09:44.230 "is_configured": false, 00:09:44.230 "data_offset": 0, 00:09:44.230 "data_size": 65536 00:09:44.230 }, 00:09:44.230 { 00:09:44.230 "name": "BaseBdev3", 00:09:44.230 "uuid": "48cf4032-c70a-4b5f-a8fa-6f52e1ba3db3", 00:09:44.230 "is_configured": true, 00:09:44.230 "data_offset": 0, 00:09:44.230 "data_size": 65536 00:09:44.230 } 00:09:44.230 ] 00:09:44.230 }' 00:09:44.230 17:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.230 17:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.489 17:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:44.489 17:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.489 17:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.489 17:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.489 17:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.489 17:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:44.489 17:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:44.489 17:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.489 17:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.489 [2024-12-07 17:26:17.853550] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:44.489 17:26:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.489 17:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:44.489 17:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.489 17:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.489 17:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:44.489 17:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:44.489 17:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:44.489 17:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.489 17:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.489 17:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.489 17:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.489 17:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.489 17:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.489 17:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.489 17:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.748 17:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.748 17:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.748 "name": "Existed_Raid", 00:09:44.748 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:44.748 "strip_size_kb": 0, 00:09:44.748 "state": "configuring", 00:09:44.748 "raid_level": "raid1", 00:09:44.748 "superblock": false, 00:09:44.748 "num_base_bdevs": 3, 00:09:44.748 "num_base_bdevs_discovered": 1, 00:09:44.748 "num_base_bdevs_operational": 3, 00:09:44.748 "base_bdevs_list": [ 00:09:44.748 { 00:09:44.748 "name": "BaseBdev1", 00:09:44.748 "uuid": "a8d3742a-c728-4c9f-bbbf-5434794ab965", 00:09:44.748 "is_configured": true, 00:09:44.748 "data_offset": 0, 00:09:44.748 "data_size": 65536 00:09:44.748 }, 00:09:44.748 { 00:09:44.748 "name": null, 00:09:44.748 "uuid": "0e803ad2-0937-43d6-b979-f41b5322092a", 00:09:44.748 "is_configured": false, 00:09:44.748 "data_offset": 0, 00:09:44.748 "data_size": 65536 00:09:44.748 }, 00:09:44.748 { 00:09:44.748 "name": null, 00:09:44.748 "uuid": "48cf4032-c70a-4b5f-a8fa-6f52e1ba3db3", 00:09:44.748 "is_configured": false, 00:09:44.748 "data_offset": 0, 00:09:44.748 "data_size": 65536 00:09:44.748 } 00:09:44.748 ] 00:09:44.748 }' 00:09:44.748 17:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.748 17:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.007 17:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.007 17:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:45.007 17:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.007 17:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.007 17:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.007 17:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:45.007 17:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:45.007 17:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.007 17:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.007 [2024-12-07 17:26:18.288927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:45.007 17:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.007 17:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:45.007 17:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:45.007 17:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:45.007 17:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:45.007 17:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:45.007 17:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:45.007 17:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.007 17:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.007 17:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.007 17:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.007 17:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.007 17:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.007 17:26:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.007 17:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.007 17:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.007 17:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.007 "name": "Existed_Raid", 00:09:45.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.007 "strip_size_kb": 0, 00:09:45.007 "state": "configuring", 00:09:45.007 "raid_level": "raid1", 00:09:45.007 "superblock": false, 00:09:45.007 "num_base_bdevs": 3, 00:09:45.007 "num_base_bdevs_discovered": 2, 00:09:45.007 "num_base_bdevs_operational": 3, 00:09:45.007 "base_bdevs_list": [ 00:09:45.007 { 00:09:45.007 "name": "BaseBdev1", 00:09:45.007 "uuid": "a8d3742a-c728-4c9f-bbbf-5434794ab965", 00:09:45.007 "is_configured": true, 00:09:45.007 "data_offset": 0, 00:09:45.007 "data_size": 65536 00:09:45.007 }, 00:09:45.007 { 00:09:45.007 "name": null, 00:09:45.007 "uuid": "0e803ad2-0937-43d6-b979-f41b5322092a", 00:09:45.007 "is_configured": false, 00:09:45.007 "data_offset": 0, 00:09:45.007 "data_size": 65536 00:09:45.007 }, 00:09:45.007 { 00:09:45.007 "name": "BaseBdev3", 00:09:45.007 "uuid": "48cf4032-c70a-4b5f-a8fa-6f52e1ba3db3", 00:09:45.007 "is_configured": true, 00:09:45.007 "data_offset": 0, 00:09:45.007 "data_size": 65536 00:09:45.007 } 00:09:45.007 ] 00:09:45.007 }' 00:09:45.007 17:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.007 17:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.585 17:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.585 17:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:45.585 17:26:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.585 17:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.585 17:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.585 17:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:45.585 17:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:45.585 17:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.585 17:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.585 [2024-12-07 17:26:18.716147] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:45.585 17:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.585 17:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:45.585 17:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:45.585 17:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:45.585 17:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:45.585 17:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:45.585 17:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:45.585 17:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.585 17:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.585 17:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.585 17:26:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.585 17:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.585 17:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.585 17:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.585 17:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.585 17:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.585 17:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.585 "name": "Existed_Raid", 00:09:45.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.585 "strip_size_kb": 0, 00:09:45.585 "state": "configuring", 00:09:45.585 "raid_level": "raid1", 00:09:45.585 "superblock": false, 00:09:45.585 "num_base_bdevs": 3, 00:09:45.585 "num_base_bdevs_discovered": 1, 00:09:45.585 "num_base_bdevs_operational": 3, 00:09:45.585 "base_bdevs_list": [ 00:09:45.585 { 00:09:45.585 "name": null, 00:09:45.585 "uuid": "a8d3742a-c728-4c9f-bbbf-5434794ab965", 00:09:45.585 "is_configured": false, 00:09:45.585 "data_offset": 0, 00:09:45.585 "data_size": 65536 00:09:45.585 }, 00:09:45.585 { 00:09:45.585 "name": null, 00:09:45.585 "uuid": "0e803ad2-0937-43d6-b979-f41b5322092a", 00:09:45.585 "is_configured": false, 00:09:45.585 "data_offset": 0, 00:09:45.586 "data_size": 65536 00:09:45.586 }, 00:09:45.586 { 00:09:45.586 "name": "BaseBdev3", 00:09:45.586 "uuid": "48cf4032-c70a-4b5f-a8fa-6f52e1ba3db3", 00:09:45.586 "is_configured": true, 00:09:45.586 "data_offset": 0, 00:09:45.586 "data_size": 65536 00:09:45.586 } 00:09:45.586 ] 00:09:45.586 }' 00:09:45.586 17:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.586 17:26:18 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:46.152 17:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.152 17:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:46.152 17:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.152 17:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.152 17:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.152 17:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:46.152 17:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:46.152 17:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.152 17:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.152 [2024-12-07 17:26:19.334764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:46.152 17:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.152 17:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:46.152 17:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:46.152 17:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:46.152 17:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:46.152 17:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:46.152 17:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:09:46.152 17:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.152 17:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.152 17:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.152 17:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.152 17:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.152 17:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.152 17:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.152 17:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.152 17:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.152 17:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.152 "name": "Existed_Raid", 00:09:46.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.152 "strip_size_kb": 0, 00:09:46.152 "state": "configuring", 00:09:46.152 "raid_level": "raid1", 00:09:46.152 "superblock": false, 00:09:46.152 "num_base_bdevs": 3, 00:09:46.152 "num_base_bdevs_discovered": 2, 00:09:46.152 "num_base_bdevs_operational": 3, 00:09:46.152 "base_bdevs_list": [ 00:09:46.152 { 00:09:46.152 "name": null, 00:09:46.152 "uuid": "a8d3742a-c728-4c9f-bbbf-5434794ab965", 00:09:46.152 "is_configured": false, 00:09:46.152 "data_offset": 0, 00:09:46.152 "data_size": 65536 00:09:46.152 }, 00:09:46.152 { 00:09:46.152 "name": "BaseBdev2", 00:09:46.152 "uuid": "0e803ad2-0937-43d6-b979-f41b5322092a", 00:09:46.152 "is_configured": true, 00:09:46.152 "data_offset": 0, 00:09:46.152 "data_size": 65536 00:09:46.152 }, 00:09:46.152 { 
00:09:46.152 "name": "BaseBdev3", 00:09:46.152 "uuid": "48cf4032-c70a-4b5f-a8fa-6f52e1ba3db3", 00:09:46.152 "is_configured": true, 00:09:46.152 "data_offset": 0, 00:09:46.152 "data_size": 65536 00:09:46.152 } 00:09:46.152 ] 00:09:46.152 }' 00:09:46.152 17:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.152 17:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.719 17:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.719 17:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.719 17:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.719 17:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:46.719 17:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.719 17:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:46.719 17:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:46.719 17:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.719 17:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.719 17:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.719 17:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.719 17:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a8d3742a-c728-4c9f-bbbf-5434794ab965 00:09:46.719 17:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.719 17:26:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.719 [2024-12-07 17:26:19.929187] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:46.719 [2024-12-07 17:26:19.929340] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:46.719 [2024-12-07 17:26:19.929369] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:46.719 [2024-12-07 17:26:19.929689] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:46.719 [2024-12-07 17:26:19.929922] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:46.719 [2024-12-07 17:26:19.929991] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:46.719 [2024-12-07 17:26:19.930305] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:46.719 NewBaseBdev 00:09:46.719 17:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.719 17:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:46.719 17:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:46.719 17:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:46.719 17:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:46.719 17:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:46.719 17:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:46.719 17:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:46.719 17:26:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.719 17:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.719 17:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.719 17:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:46.719 17:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.719 17:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.719 [ 00:09:46.719 { 00:09:46.719 "name": "NewBaseBdev", 00:09:46.719 "aliases": [ 00:09:46.719 "a8d3742a-c728-4c9f-bbbf-5434794ab965" 00:09:46.719 ], 00:09:46.719 "product_name": "Malloc disk", 00:09:46.719 "block_size": 512, 00:09:46.719 "num_blocks": 65536, 00:09:46.719 "uuid": "a8d3742a-c728-4c9f-bbbf-5434794ab965", 00:09:46.719 "assigned_rate_limits": { 00:09:46.719 "rw_ios_per_sec": 0, 00:09:46.719 "rw_mbytes_per_sec": 0, 00:09:46.719 "r_mbytes_per_sec": 0, 00:09:46.719 "w_mbytes_per_sec": 0 00:09:46.719 }, 00:09:46.719 "claimed": true, 00:09:46.719 "claim_type": "exclusive_write", 00:09:46.719 "zoned": false, 00:09:46.719 "supported_io_types": { 00:09:46.719 "read": true, 00:09:46.719 "write": true, 00:09:46.719 "unmap": true, 00:09:46.719 "flush": true, 00:09:46.719 "reset": true, 00:09:46.719 "nvme_admin": false, 00:09:46.719 "nvme_io": false, 00:09:46.719 "nvme_io_md": false, 00:09:46.719 "write_zeroes": true, 00:09:46.719 "zcopy": true, 00:09:46.719 "get_zone_info": false, 00:09:46.719 "zone_management": false, 00:09:46.719 "zone_append": false, 00:09:46.719 "compare": false, 00:09:46.719 "compare_and_write": false, 00:09:46.719 "abort": true, 00:09:46.719 "seek_hole": false, 00:09:46.719 "seek_data": false, 00:09:46.719 "copy": true, 00:09:46.719 "nvme_iov_md": false 00:09:46.719 }, 00:09:46.719 "memory_domains": [ 00:09:46.719 { 00:09:46.719 
"dma_device_id": "system", 00:09:46.719 "dma_device_type": 1 00:09:46.719 }, 00:09:46.719 { 00:09:46.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.719 "dma_device_type": 2 00:09:46.719 } 00:09:46.719 ], 00:09:46.719 "driver_specific": {} 00:09:46.719 } 00:09:46.719 ] 00:09:46.719 17:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.719 17:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:46.719 17:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:46.719 17:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:46.719 17:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:46.719 17:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:46.719 17:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:46.719 17:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:46.719 17:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.719 17:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.719 17:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.719 17:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.719 17:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.719 17:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.719 17:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.719 17:26:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.719 17:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.720 17:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.720 "name": "Existed_Raid", 00:09:46.720 "uuid": "1d167a90-16bf-4e68-bf25-8a3b13dbae3f", 00:09:46.720 "strip_size_kb": 0, 00:09:46.720 "state": "online", 00:09:46.720 "raid_level": "raid1", 00:09:46.720 "superblock": false, 00:09:46.720 "num_base_bdevs": 3, 00:09:46.720 "num_base_bdevs_discovered": 3, 00:09:46.720 "num_base_bdevs_operational": 3, 00:09:46.720 "base_bdevs_list": [ 00:09:46.720 { 00:09:46.720 "name": "NewBaseBdev", 00:09:46.720 "uuid": "a8d3742a-c728-4c9f-bbbf-5434794ab965", 00:09:46.720 "is_configured": true, 00:09:46.720 "data_offset": 0, 00:09:46.720 "data_size": 65536 00:09:46.720 }, 00:09:46.720 { 00:09:46.720 "name": "BaseBdev2", 00:09:46.720 "uuid": "0e803ad2-0937-43d6-b979-f41b5322092a", 00:09:46.720 "is_configured": true, 00:09:46.720 "data_offset": 0, 00:09:46.720 "data_size": 65536 00:09:46.720 }, 00:09:46.720 { 00:09:46.720 "name": "BaseBdev3", 00:09:46.720 "uuid": "48cf4032-c70a-4b5f-a8fa-6f52e1ba3db3", 00:09:46.720 "is_configured": true, 00:09:46.720 "data_offset": 0, 00:09:46.720 "data_size": 65536 00:09:46.720 } 00:09:46.720 ] 00:09:46.720 }' 00:09:46.720 17:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.720 17:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.287 17:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:47.287 17:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:47.287 17:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:47.287 
17:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:47.288 17:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:47.288 17:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:47.288 17:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:47.288 17:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.288 17:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.288 17:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:47.288 [2024-12-07 17:26:20.448709] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:47.288 17:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.288 17:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:47.288 "name": "Existed_Raid", 00:09:47.288 "aliases": [ 00:09:47.288 "1d167a90-16bf-4e68-bf25-8a3b13dbae3f" 00:09:47.288 ], 00:09:47.288 "product_name": "Raid Volume", 00:09:47.288 "block_size": 512, 00:09:47.288 "num_blocks": 65536, 00:09:47.288 "uuid": "1d167a90-16bf-4e68-bf25-8a3b13dbae3f", 00:09:47.288 "assigned_rate_limits": { 00:09:47.288 "rw_ios_per_sec": 0, 00:09:47.288 "rw_mbytes_per_sec": 0, 00:09:47.288 "r_mbytes_per_sec": 0, 00:09:47.288 "w_mbytes_per_sec": 0 00:09:47.288 }, 00:09:47.288 "claimed": false, 00:09:47.288 "zoned": false, 00:09:47.288 "supported_io_types": { 00:09:47.288 "read": true, 00:09:47.288 "write": true, 00:09:47.288 "unmap": false, 00:09:47.288 "flush": false, 00:09:47.288 "reset": true, 00:09:47.288 "nvme_admin": false, 00:09:47.288 "nvme_io": false, 00:09:47.288 "nvme_io_md": false, 00:09:47.288 "write_zeroes": true, 00:09:47.288 "zcopy": false, 00:09:47.288 
"get_zone_info": false, 00:09:47.288 "zone_management": false, 00:09:47.288 "zone_append": false, 00:09:47.288 "compare": false, 00:09:47.288 "compare_and_write": false, 00:09:47.288 "abort": false, 00:09:47.288 "seek_hole": false, 00:09:47.288 "seek_data": false, 00:09:47.288 "copy": false, 00:09:47.288 "nvme_iov_md": false 00:09:47.288 }, 00:09:47.288 "memory_domains": [ 00:09:47.288 { 00:09:47.288 "dma_device_id": "system", 00:09:47.288 "dma_device_type": 1 00:09:47.288 }, 00:09:47.288 { 00:09:47.288 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.288 "dma_device_type": 2 00:09:47.288 }, 00:09:47.288 { 00:09:47.288 "dma_device_id": "system", 00:09:47.288 "dma_device_type": 1 00:09:47.288 }, 00:09:47.288 { 00:09:47.288 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.288 "dma_device_type": 2 00:09:47.288 }, 00:09:47.288 { 00:09:47.288 "dma_device_id": "system", 00:09:47.288 "dma_device_type": 1 00:09:47.288 }, 00:09:47.288 { 00:09:47.288 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.288 "dma_device_type": 2 00:09:47.288 } 00:09:47.288 ], 00:09:47.288 "driver_specific": { 00:09:47.288 "raid": { 00:09:47.288 "uuid": "1d167a90-16bf-4e68-bf25-8a3b13dbae3f", 00:09:47.288 "strip_size_kb": 0, 00:09:47.288 "state": "online", 00:09:47.288 "raid_level": "raid1", 00:09:47.288 "superblock": false, 00:09:47.288 "num_base_bdevs": 3, 00:09:47.288 "num_base_bdevs_discovered": 3, 00:09:47.288 "num_base_bdevs_operational": 3, 00:09:47.288 "base_bdevs_list": [ 00:09:47.288 { 00:09:47.288 "name": "NewBaseBdev", 00:09:47.288 "uuid": "a8d3742a-c728-4c9f-bbbf-5434794ab965", 00:09:47.288 "is_configured": true, 00:09:47.288 "data_offset": 0, 00:09:47.288 "data_size": 65536 00:09:47.288 }, 00:09:47.288 { 00:09:47.288 "name": "BaseBdev2", 00:09:47.288 "uuid": "0e803ad2-0937-43d6-b979-f41b5322092a", 00:09:47.288 "is_configured": true, 00:09:47.288 "data_offset": 0, 00:09:47.288 "data_size": 65536 00:09:47.288 }, 00:09:47.288 { 00:09:47.288 "name": "BaseBdev3", 00:09:47.288 "uuid": 
"48cf4032-c70a-4b5f-a8fa-6f52e1ba3db3", 00:09:47.288 "is_configured": true, 00:09:47.288 "data_offset": 0, 00:09:47.288 "data_size": 65536 00:09:47.288 } 00:09:47.288 ] 00:09:47.288 } 00:09:47.288 } 00:09:47.288 }' 00:09:47.288 17:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:47.288 17:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:47.288 BaseBdev2 00:09:47.288 BaseBdev3' 00:09:47.288 17:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.288 17:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:47.288 17:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:47.288 17:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:47.288 17:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.288 17:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.288 17:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.288 17:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.288 17:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:47.288 17:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:47.288 17:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:47.288 17:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:09:47.288 17:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.288 17:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.288 17:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.288 17:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.548 17:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:47.548 17:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:47.548 17:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:47.548 17:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:47.548 17:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.548 17:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.548 17:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.548 17:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.548 17:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:47.548 17:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:47.548 17:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:47.548 17:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.548 17:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.548 
[2024-12-07 17:26:20.732002] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:47.548 [2024-12-07 17:26:20.732145] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:47.548 [2024-12-07 17:26:20.732278] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:47.548 [2024-12-07 17:26:20.732605] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:47.548 [2024-12-07 17:26:20.732618] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:47.548 17:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.548 17:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67410 00:09:47.548 17:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67410 ']' 00:09:47.548 17:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67410 00:09:47.548 17:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:47.548 17:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:47.548 17:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67410 00:09:47.548 17:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:47.549 17:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:47.549 17:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67410' 00:09:47.549 killing process with pid 67410 00:09:47.549 17:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67410 00:09:47.549 [2024-12-07 
17:26:20.775701] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:47.549 17:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67410 00:09:47.809 [2024-12-07 17:26:21.103753] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:49.190 17:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:49.190 00:09:49.190 real 0m10.780s 00:09:49.190 user 0m16.905s 00:09:49.190 sys 0m1.937s 00:09:49.190 17:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:49.190 17:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.190 ************************************ 00:09:49.190 END TEST raid_state_function_test 00:09:49.190 ************************************ 00:09:49.190 17:26:22 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:09:49.190 17:26:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:49.190 17:26:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:49.190 17:26:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:49.190 ************************************ 00:09:49.190 START TEST raid_state_function_test_sb 00:09:49.190 ************************************ 00:09:49.190 17:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:09:49.190 17:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:49.190 17:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:49.190 17:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:49.190 17:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:49.190 17:26:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:49.190 17:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:49.190 17:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:49.190 17:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:49.190 17:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:49.190 17:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:49.190 17:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:49.190 17:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:49.190 17:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:49.190 17:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:49.190 17:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:49.190 17:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:49.190 17:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:49.190 17:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:49.190 17:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:49.190 17:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:49.190 17:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:49.190 17:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:49.190 
17:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:49.190 17:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:49.190 17:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:49.190 17:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68037 00:09:49.190 17:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:49.190 17:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68037' 00:09:49.190 Process raid pid: 68037 00:09:49.190 17:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68037 00:09:49.190 17:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 68037 ']' 00:09:49.190 17:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.190 17:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:49.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.190 17:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.190 17:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:49.191 17:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.191 [2024-12-07 17:26:22.495708] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:09:49.191 [2024-12-07 17:26:22.495846] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:49.451 [2024-12-07 17:26:22.669040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.451 [2024-12-07 17:26:22.808262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.711 [2024-12-07 17:26:23.052573] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:49.711 [2024-12-07 17:26:23.052620] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:49.971 17:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:49.971 17:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:49.971 17:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:49.971 17:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.971 17:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.971 [2024-12-07 17:26:23.329988] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:49.971 [2024-12-07 17:26:23.330055] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:49.971 [2024-12-07 17:26:23.330072] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:49.971 [2024-12-07 17:26:23.330082] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:49.971 [2024-12-07 17:26:23.330088] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:49.971 [2024-12-07 17:26:23.330097] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:49.971 17:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.971 17:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:49.971 17:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:49.971 17:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:49.971 17:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:49.971 17:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:49.971 17:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:49.971 17:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.971 17:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.971 17:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.971 17:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.971 17:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.971 17:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:49.971 17:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.971 17:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.231 17:26:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.231 17:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.231 "name": "Existed_Raid", 00:09:50.231 "uuid": "7241bd95-c422-4054-8e24-3bef6010a90d", 00:09:50.231 "strip_size_kb": 0, 00:09:50.231 "state": "configuring", 00:09:50.231 "raid_level": "raid1", 00:09:50.231 "superblock": true, 00:09:50.231 "num_base_bdevs": 3, 00:09:50.231 "num_base_bdevs_discovered": 0, 00:09:50.231 "num_base_bdevs_operational": 3, 00:09:50.231 "base_bdevs_list": [ 00:09:50.231 { 00:09:50.231 "name": "BaseBdev1", 00:09:50.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.231 "is_configured": false, 00:09:50.231 "data_offset": 0, 00:09:50.231 "data_size": 0 00:09:50.231 }, 00:09:50.231 { 00:09:50.231 "name": "BaseBdev2", 00:09:50.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.231 "is_configured": false, 00:09:50.231 "data_offset": 0, 00:09:50.231 "data_size": 0 00:09:50.231 }, 00:09:50.231 { 00:09:50.231 "name": "BaseBdev3", 00:09:50.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.231 "is_configured": false, 00:09:50.231 "data_offset": 0, 00:09:50.231 "data_size": 0 00:09:50.231 } 00:09:50.231 ] 00:09:50.231 }' 00:09:50.231 17:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.231 17:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.490 17:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:50.490 17:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.490 17:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.490 [2024-12-07 17:26:23.753230] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:50.490 [2024-12-07 17:26:23.753290] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:50.490 17:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.490 17:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:50.490 17:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.490 17:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.490 [2024-12-07 17:26:23.765181] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:50.490 [2024-12-07 17:26:23.765233] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:50.490 [2024-12-07 17:26:23.765243] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:50.491 [2024-12-07 17:26:23.765252] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:50.491 [2024-12-07 17:26:23.765258] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:50.491 [2024-12-07 17:26:23.765267] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:50.491 17:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.491 17:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:50.491 17:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.491 17:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.491 [2024-12-07 17:26:23.815847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:50.491 BaseBdev1 
00:09:50.491 17:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.491 17:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:50.491 17:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:50.491 17:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:50.491 17:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:50.491 17:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:50.491 17:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:50.491 17:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:50.491 17:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.491 17:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.491 17:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.491 17:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:50.491 17:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.491 17:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.491 [ 00:09:50.491 { 00:09:50.491 "name": "BaseBdev1", 00:09:50.491 "aliases": [ 00:09:50.491 "88347b9b-b071-46ce-bce6-035f22478dfa" 00:09:50.491 ], 00:09:50.491 "product_name": "Malloc disk", 00:09:50.491 "block_size": 512, 00:09:50.491 "num_blocks": 65536, 00:09:50.491 "uuid": "88347b9b-b071-46ce-bce6-035f22478dfa", 00:09:50.491 "assigned_rate_limits": { 00:09:50.491 
"rw_ios_per_sec": 0, 00:09:50.491 "rw_mbytes_per_sec": 0, 00:09:50.491 "r_mbytes_per_sec": 0, 00:09:50.491 "w_mbytes_per_sec": 0 00:09:50.491 }, 00:09:50.491 "claimed": true, 00:09:50.491 "claim_type": "exclusive_write", 00:09:50.491 "zoned": false, 00:09:50.491 "supported_io_types": { 00:09:50.491 "read": true, 00:09:50.491 "write": true, 00:09:50.491 "unmap": true, 00:09:50.491 "flush": true, 00:09:50.491 "reset": true, 00:09:50.491 "nvme_admin": false, 00:09:50.491 "nvme_io": false, 00:09:50.491 "nvme_io_md": false, 00:09:50.491 "write_zeroes": true, 00:09:50.491 "zcopy": true, 00:09:50.491 "get_zone_info": false, 00:09:50.491 "zone_management": false, 00:09:50.491 "zone_append": false, 00:09:50.491 "compare": false, 00:09:50.491 "compare_and_write": false, 00:09:50.491 "abort": true, 00:09:50.491 "seek_hole": false, 00:09:50.491 "seek_data": false, 00:09:50.491 "copy": true, 00:09:50.491 "nvme_iov_md": false 00:09:50.491 }, 00:09:50.491 "memory_domains": [ 00:09:50.491 { 00:09:50.491 "dma_device_id": "system", 00:09:50.491 "dma_device_type": 1 00:09:50.491 }, 00:09:50.491 { 00:09:50.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.491 "dma_device_type": 2 00:09:50.491 } 00:09:50.491 ], 00:09:50.491 "driver_specific": {} 00:09:50.491 } 00:09:50.491 ] 00:09:50.491 17:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.491 17:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:50.491 17:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:50.491 17:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:50.491 17:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:50.491 17:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:09:50.491 17:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:50.491 17:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:50.491 17:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.491 17:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.491 17:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.491 17:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.491 17:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.491 17:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.491 17:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.491 17:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.750 17:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.750 17:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.750 "name": "Existed_Raid", 00:09:50.750 "uuid": "90f66512-80c8-4baf-85f6-62acf23596dc", 00:09:50.750 "strip_size_kb": 0, 00:09:50.750 "state": "configuring", 00:09:50.750 "raid_level": "raid1", 00:09:50.750 "superblock": true, 00:09:50.750 "num_base_bdevs": 3, 00:09:50.750 "num_base_bdevs_discovered": 1, 00:09:50.750 "num_base_bdevs_operational": 3, 00:09:50.750 "base_bdevs_list": [ 00:09:50.750 { 00:09:50.750 "name": "BaseBdev1", 00:09:50.750 "uuid": "88347b9b-b071-46ce-bce6-035f22478dfa", 00:09:50.750 "is_configured": true, 00:09:50.750 "data_offset": 2048, 00:09:50.750 "data_size": 63488 
00:09:50.750 }, 00:09:50.750 { 00:09:50.750 "name": "BaseBdev2", 00:09:50.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.750 "is_configured": false, 00:09:50.750 "data_offset": 0, 00:09:50.750 "data_size": 0 00:09:50.750 }, 00:09:50.750 { 00:09:50.750 "name": "BaseBdev3", 00:09:50.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.750 "is_configured": false, 00:09:50.750 "data_offset": 0, 00:09:50.750 "data_size": 0 00:09:50.750 } 00:09:50.750 ] 00:09:50.750 }' 00:09:50.750 17:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.750 17:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.025 17:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:51.025 17:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.025 17:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.025 [2024-12-07 17:26:24.287120] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:51.025 [2024-12-07 17:26:24.287198] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:51.025 17:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.025 17:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:51.025 17:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.025 17:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.025 [2024-12-07 17:26:24.295131] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:51.025 [2024-12-07 17:26:24.297227] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:51.025 [2024-12-07 17:26:24.297271] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:51.025 [2024-12-07 17:26:24.297281] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:51.025 [2024-12-07 17:26:24.297290] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:51.025 17:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.025 17:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:51.025 17:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:51.025 17:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:51.025 17:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:51.025 17:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:51.025 17:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:51.025 17:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:51.025 17:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:51.025 17:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.025 17:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.025 17:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.025 17:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:09:51.025 17:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.025 17:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.025 17:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.025 17:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.025 17:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.025 17:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.025 "name": "Existed_Raid", 00:09:51.025 "uuid": "ac9477d8-bf2d-4c97-b02e-251000bbb32f", 00:09:51.025 "strip_size_kb": 0, 00:09:51.025 "state": "configuring", 00:09:51.025 "raid_level": "raid1", 00:09:51.025 "superblock": true, 00:09:51.025 "num_base_bdevs": 3, 00:09:51.025 "num_base_bdevs_discovered": 1, 00:09:51.025 "num_base_bdevs_operational": 3, 00:09:51.025 "base_bdevs_list": [ 00:09:51.025 { 00:09:51.025 "name": "BaseBdev1", 00:09:51.025 "uuid": "88347b9b-b071-46ce-bce6-035f22478dfa", 00:09:51.025 "is_configured": true, 00:09:51.025 "data_offset": 2048, 00:09:51.025 "data_size": 63488 00:09:51.025 }, 00:09:51.025 { 00:09:51.025 "name": "BaseBdev2", 00:09:51.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.025 "is_configured": false, 00:09:51.025 "data_offset": 0, 00:09:51.025 "data_size": 0 00:09:51.025 }, 00:09:51.025 { 00:09:51.025 "name": "BaseBdev3", 00:09:51.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.025 "is_configured": false, 00:09:51.025 "data_offset": 0, 00:09:51.025 "data_size": 0 00:09:51.025 } 00:09:51.025 ] 00:09:51.025 }' 00:09:51.025 17:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.025 17:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:51.609 17:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:51.609 17:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.609 17:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.609 [2024-12-07 17:26:24.775004] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:51.609 BaseBdev2 00:09:51.609 17:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.609 17:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:51.609 17:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:51.609 17:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:51.609 17:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:51.609 17:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:51.609 17:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:51.609 17:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:51.609 17:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.609 17:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.609 17:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.609 17:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:51.609 17:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:51.609 17:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.609 [ 00:09:51.609 { 00:09:51.609 "name": "BaseBdev2", 00:09:51.609 "aliases": [ 00:09:51.609 "b0491d7c-44ed-4a81-a69f-ea0193a57be6" 00:09:51.609 ], 00:09:51.609 "product_name": "Malloc disk", 00:09:51.609 "block_size": 512, 00:09:51.609 "num_blocks": 65536, 00:09:51.609 "uuid": "b0491d7c-44ed-4a81-a69f-ea0193a57be6", 00:09:51.609 "assigned_rate_limits": { 00:09:51.609 "rw_ios_per_sec": 0, 00:09:51.609 "rw_mbytes_per_sec": 0, 00:09:51.609 "r_mbytes_per_sec": 0, 00:09:51.609 "w_mbytes_per_sec": 0 00:09:51.609 }, 00:09:51.610 "claimed": true, 00:09:51.610 "claim_type": "exclusive_write", 00:09:51.610 "zoned": false, 00:09:51.610 "supported_io_types": { 00:09:51.610 "read": true, 00:09:51.610 "write": true, 00:09:51.610 "unmap": true, 00:09:51.610 "flush": true, 00:09:51.610 "reset": true, 00:09:51.610 "nvme_admin": false, 00:09:51.610 "nvme_io": false, 00:09:51.610 "nvme_io_md": false, 00:09:51.610 "write_zeroes": true, 00:09:51.610 "zcopy": true, 00:09:51.610 "get_zone_info": false, 00:09:51.610 "zone_management": false, 00:09:51.610 "zone_append": false, 00:09:51.610 "compare": false, 00:09:51.610 "compare_and_write": false, 00:09:51.610 "abort": true, 00:09:51.610 "seek_hole": false, 00:09:51.610 "seek_data": false, 00:09:51.610 "copy": true, 00:09:51.610 "nvme_iov_md": false 00:09:51.610 }, 00:09:51.610 "memory_domains": [ 00:09:51.610 { 00:09:51.610 "dma_device_id": "system", 00:09:51.610 "dma_device_type": 1 00:09:51.610 }, 00:09:51.610 { 00:09:51.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.610 "dma_device_type": 2 00:09:51.610 } 00:09:51.610 ], 00:09:51.610 "driver_specific": {} 00:09:51.610 } 00:09:51.610 ] 00:09:51.610 17:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.610 17:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:09:51.610 17:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:51.610 17:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:51.610 17:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:51.610 17:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:51.610 17:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:51.610 17:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:51.610 17:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:51.610 17:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:51.610 17:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.610 17:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.610 17:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.610 17:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.610 17:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.610 17:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.610 17:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.610 17:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.610 17:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.610 
17:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.610 "name": "Existed_Raid", 00:09:51.610 "uuid": "ac9477d8-bf2d-4c97-b02e-251000bbb32f", 00:09:51.610 "strip_size_kb": 0, 00:09:51.610 "state": "configuring", 00:09:51.610 "raid_level": "raid1", 00:09:51.610 "superblock": true, 00:09:51.610 "num_base_bdevs": 3, 00:09:51.610 "num_base_bdevs_discovered": 2, 00:09:51.610 "num_base_bdevs_operational": 3, 00:09:51.610 "base_bdevs_list": [ 00:09:51.610 { 00:09:51.610 "name": "BaseBdev1", 00:09:51.610 "uuid": "88347b9b-b071-46ce-bce6-035f22478dfa", 00:09:51.610 "is_configured": true, 00:09:51.610 "data_offset": 2048, 00:09:51.610 "data_size": 63488 00:09:51.610 }, 00:09:51.610 { 00:09:51.610 "name": "BaseBdev2", 00:09:51.610 "uuid": "b0491d7c-44ed-4a81-a69f-ea0193a57be6", 00:09:51.610 "is_configured": true, 00:09:51.610 "data_offset": 2048, 00:09:51.610 "data_size": 63488 00:09:51.610 }, 00:09:51.610 { 00:09:51.610 "name": "BaseBdev3", 00:09:51.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.610 "is_configured": false, 00:09:51.610 "data_offset": 0, 00:09:51.610 "data_size": 0 00:09:51.610 } 00:09:51.610 ] 00:09:51.610 }' 00:09:51.610 17:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.610 17:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.869 17:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:51.869 17:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.869 17:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.128 [2024-12-07 17:26:25.270532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:52.128 [2024-12-07 17:26:25.270802] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:09:52.128 [2024-12-07 17:26:25.270825] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:52.128 [2024-12-07 17:26:25.271168] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:52.128 [2024-12-07 17:26:25.271341] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:52.128 [2024-12-07 17:26:25.271356] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:52.128 BaseBdev3 00:09:52.128 [2024-12-07 17:26:25.271511] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:52.128 17:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.128 17:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:52.128 17:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:52.128 17:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:52.128 17:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:52.128 17:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:52.128 17:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:52.128 17:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:52.128 17:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.128 17:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.128 17:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.128 17:26:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:52.128 17:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.128 17:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.128 [ 00:09:52.128 { 00:09:52.128 "name": "BaseBdev3", 00:09:52.128 "aliases": [ 00:09:52.128 "8601efa2-9242-4d25-91cf-f9185fa09a2c" 00:09:52.128 ], 00:09:52.128 "product_name": "Malloc disk", 00:09:52.128 "block_size": 512, 00:09:52.128 "num_blocks": 65536, 00:09:52.128 "uuid": "8601efa2-9242-4d25-91cf-f9185fa09a2c", 00:09:52.128 "assigned_rate_limits": { 00:09:52.128 "rw_ios_per_sec": 0, 00:09:52.128 "rw_mbytes_per_sec": 0, 00:09:52.128 "r_mbytes_per_sec": 0, 00:09:52.128 "w_mbytes_per_sec": 0 00:09:52.128 }, 00:09:52.128 "claimed": true, 00:09:52.128 "claim_type": "exclusive_write", 00:09:52.128 "zoned": false, 00:09:52.128 "supported_io_types": { 00:09:52.128 "read": true, 00:09:52.128 "write": true, 00:09:52.128 "unmap": true, 00:09:52.128 "flush": true, 00:09:52.128 "reset": true, 00:09:52.128 "nvme_admin": false, 00:09:52.128 "nvme_io": false, 00:09:52.128 "nvme_io_md": false, 00:09:52.128 "write_zeroes": true, 00:09:52.128 "zcopy": true, 00:09:52.128 "get_zone_info": false, 00:09:52.128 "zone_management": false, 00:09:52.128 "zone_append": false, 00:09:52.128 "compare": false, 00:09:52.128 "compare_and_write": false, 00:09:52.128 "abort": true, 00:09:52.128 "seek_hole": false, 00:09:52.128 "seek_data": false, 00:09:52.128 "copy": true, 00:09:52.128 "nvme_iov_md": false 00:09:52.128 }, 00:09:52.128 "memory_domains": [ 00:09:52.128 { 00:09:52.128 "dma_device_id": "system", 00:09:52.128 "dma_device_type": 1 00:09:52.128 }, 00:09:52.128 { 00:09:52.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.128 "dma_device_type": 2 00:09:52.128 } 00:09:52.128 ], 00:09:52.128 "driver_specific": {} 00:09:52.128 } 00:09:52.128 ] 
00:09:52.128 17:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.128 17:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:52.128 17:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:52.128 17:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:52.128 17:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:52.128 17:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:52.128 17:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:52.128 17:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:52.128 17:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:52.128 17:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:52.128 17:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.128 17:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.128 17:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.128 17:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.128 17:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.128 17:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:52.128 17:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.128 
17:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.128 17:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.128 17:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.128 "name": "Existed_Raid", 00:09:52.128 "uuid": "ac9477d8-bf2d-4c97-b02e-251000bbb32f", 00:09:52.128 "strip_size_kb": 0, 00:09:52.128 "state": "online", 00:09:52.128 "raid_level": "raid1", 00:09:52.128 "superblock": true, 00:09:52.128 "num_base_bdevs": 3, 00:09:52.128 "num_base_bdevs_discovered": 3, 00:09:52.128 "num_base_bdevs_operational": 3, 00:09:52.128 "base_bdevs_list": [ 00:09:52.128 { 00:09:52.128 "name": "BaseBdev1", 00:09:52.128 "uuid": "88347b9b-b071-46ce-bce6-035f22478dfa", 00:09:52.128 "is_configured": true, 00:09:52.128 "data_offset": 2048, 00:09:52.128 "data_size": 63488 00:09:52.128 }, 00:09:52.128 { 00:09:52.128 "name": "BaseBdev2", 00:09:52.128 "uuid": "b0491d7c-44ed-4a81-a69f-ea0193a57be6", 00:09:52.128 "is_configured": true, 00:09:52.128 "data_offset": 2048, 00:09:52.128 "data_size": 63488 00:09:52.128 }, 00:09:52.128 { 00:09:52.128 "name": "BaseBdev3", 00:09:52.128 "uuid": "8601efa2-9242-4d25-91cf-f9185fa09a2c", 00:09:52.128 "is_configured": true, 00:09:52.128 "data_offset": 2048, 00:09:52.128 "data_size": 63488 00:09:52.128 } 00:09:52.128 ] 00:09:52.128 }' 00:09:52.128 17:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.128 17:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.388 17:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:52.388 17:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:52.388 17:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:09:52.388 17:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:52.388 17:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:52.388 17:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:52.388 17:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:52.388 17:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:52.388 17:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.388 17:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.388 [2024-12-07 17:26:25.706333] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:52.388 17:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.388 17:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:52.388 "name": "Existed_Raid", 00:09:52.388 "aliases": [ 00:09:52.388 "ac9477d8-bf2d-4c97-b02e-251000bbb32f" 00:09:52.388 ], 00:09:52.388 "product_name": "Raid Volume", 00:09:52.388 "block_size": 512, 00:09:52.388 "num_blocks": 63488, 00:09:52.388 "uuid": "ac9477d8-bf2d-4c97-b02e-251000bbb32f", 00:09:52.388 "assigned_rate_limits": { 00:09:52.388 "rw_ios_per_sec": 0, 00:09:52.388 "rw_mbytes_per_sec": 0, 00:09:52.388 "r_mbytes_per_sec": 0, 00:09:52.388 "w_mbytes_per_sec": 0 00:09:52.388 }, 00:09:52.388 "claimed": false, 00:09:52.388 "zoned": false, 00:09:52.388 "supported_io_types": { 00:09:52.388 "read": true, 00:09:52.388 "write": true, 00:09:52.388 "unmap": false, 00:09:52.388 "flush": false, 00:09:52.388 "reset": true, 00:09:52.388 "nvme_admin": false, 00:09:52.389 "nvme_io": false, 00:09:52.389 "nvme_io_md": false, 00:09:52.389 "write_zeroes": true, 
00:09:52.389 "zcopy": false, 00:09:52.389 "get_zone_info": false, 00:09:52.389 "zone_management": false, 00:09:52.389 "zone_append": false, 00:09:52.389 "compare": false, 00:09:52.389 "compare_and_write": false, 00:09:52.389 "abort": false, 00:09:52.389 "seek_hole": false, 00:09:52.389 "seek_data": false, 00:09:52.389 "copy": false, 00:09:52.389 "nvme_iov_md": false 00:09:52.389 }, 00:09:52.389 "memory_domains": [ 00:09:52.389 { 00:09:52.389 "dma_device_id": "system", 00:09:52.389 "dma_device_type": 1 00:09:52.389 }, 00:09:52.389 { 00:09:52.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.389 "dma_device_type": 2 00:09:52.389 }, 00:09:52.389 { 00:09:52.389 "dma_device_id": "system", 00:09:52.389 "dma_device_type": 1 00:09:52.389 }, 00:09:52.389 { 00:09:52.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.389 "dma_device_type": 2 00:09:52.389 }, 00:09:52.389 { 00:09:52.389 "dma_device_id": "system", 00:09:52.389 "dma_device_type": 1 00:09:52.389 }, 00:09:52.389 { 00:09:52.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.389 "dma_device_type": 2 00:09:52.389 } 00:09:52.389 ], 00:09:52.389 "driver_specific": { 00:09:52.389 "raid": { 00:09:52.389 "uuid": "ac9477d8-bf2d-4c97-b02e-251000bbb32f", 00:09:52.389 "strip_size_kb": 0, 00:09:52.389 "state": "online", 00:09:52.389 "raid_level": "raid1", 00:09:52.389 "superblock": true, 00:09:52.389 "num_base_bdevs": 3, 00:09:52.389 "num_base_bdevs_discovered": 3, 00:09:52.389 "num_base_bdevs_operational": 3, 00:09:52.389 "base_bdevs_list": [ 00:09:52.389 { 00:09:52.389 "name": "BaseBdev1", 00:09:52.389 "uuid": "88347b9b-b071-46ce-bce6-035f22478dfa", 00:09:52.389 "is_configured": true, 00:09:52.389 "data_offset": 2048, 00:09:52.389 "data_size": 63488 00:09:52.389 }, 00:09:52.389 { 00:09:52.389 "name": "BaseBdev2", 00:09:52.389 "uuid": "b0491d7c-44ed-4a81-a69f-ea0193a57be6", 00:09:52.389 "is_configured": true, 00:09:52.389 "data_offset": 2048, 00:09:52.389 "data_size": 63488 00:09:52.389 }, 00:09:52.389 { 
00:09:52.389 "name": "BaseBdev3", 00:09:52.389 "uuid": "8601efa2-9242-4d25-91cf-f9185fa09a2c", 00:09:52.389 "is_configured": true, 00:09:52.389 "data_offset": 2048, 00:09:52.389 "data_size": 63488 00:09:52.389 } 00:09:52.389 ] 00:09:52.389 } 00:09:52.389 } 00:09:52.389 }' 00:09:52.389 17:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:52.649 17:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:52.649 BaseBdev2 00:09:52.649 BaseBdev3' 00:09:52.649 17:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:52.649 17:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:52.649 17:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:52.649 17:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:52.649 17:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:52.649 17:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.649 17:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.649 17:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.649 17:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:52.649 17:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:52.649 17:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:52.649 17:26:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:52.649 17:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.649 17:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.649 17:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:52.649 17:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.649 17:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:52.649 17:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:52.649 17:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:52.649 17:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:52.649 17:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:52.649 17:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.649 17:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.649 17:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.649 17:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:52.649 17:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:52.649 17:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:52.649 17:26:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.649 17:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.649 [2024-12-07 17:26:25.937620] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:52.909 17:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.909 17:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:52.909 17:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:52.909 17:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:52.909 17:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:09:52.909 17:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:52.909 17:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:52.909 17:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:52.909 17:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:52.909 17:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:52.909 17:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:52.909 17:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:52.909 17:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.909 17:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.909 17:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.909 
17:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.909 17:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:52.909 17:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.909 17:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.909 17:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.909 17:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.909 17:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.909 "name": "Existed_Raid", 00:09:52.909 "uuid": "ac9477d8-bf2d-4c97-b02e-251000bbb32f", 00:09:52.909 "strip_size_kb": 0, 00:09:52.909 "state": "online", 00:09:52.909 "raid_level": "raid1", 00:09:52.909 "superblock": true, 00:09:52.909 "num_base_bdevs": 3, 00:09:52.909 "num_base_bdevs_discovered": 2, 00:09:52.909 "num_base_bdevs_operational": 2, 00:09:52.909 "base_bdevs_list": [ 00:09:52.909 { 00:09:52.909 "name": null, 00:09:52.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.909 "is_configured": false, 00:09:52.909 "data_offset": 0, 00:09:52.909 "data_size": 63488 00:09:52.909 }, 00:09:52.909 { 00:09:52.909 "name": "BaseBdev2", 00:09:52.909 "uuid": "b0491d7c-44ed-4a81-a69f-ea0193a57be6", 00:09:52.909 "is_configured": true, 00:09:52.909 "data_offset": 2048, 00:09:52.909 "data_size": 63488 00:09:52.909 }, 00:09:52.909 { 00:09:52.909 "name": "BaseBdev3", 00:09:52.909 "uuid": "8601efa2-9242-4d25-91cf-f9185fa09a2c", 00:09:52.909 "is_configured": true, 00:09:52.909 "data_offset": 2048, 00:09:52.909 "data_size": 63488 00:09:52.909 } 00:09:52.909 ] 00:09:52.909 }' 00:09:52.909 17:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.909 
17:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.168 17:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:53.168 17:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:53.168 17:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.168 17:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.168 17:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.168 17:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:53.168 17:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.168 17:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:53.168 17:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:53.168 17:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:53.168 17:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.168 17:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.168 [2024-12-07 17:26:26.537011] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:53.427 17:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.427 17:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:53.427 17:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:53.427 17:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:53.427 17:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.427 17:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.427 17:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:53.427 17:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.427 17:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:53.427 17:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:53.427 17:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:53.427 17:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.427 17:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.427 [2024-12-07 17:26:26.699590] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:53.427 [2024-12-07 17:26:26.699725] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:53.427 [2024-12-07 17:26:26.805589] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:53.427 [2024-12-07 17:26:26.805654] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:53.427 [2024-12-07 17:26:26.805669] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:53.427 17:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.687 17:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:53.687 17:26:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:53.687 17:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.687 17:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:53.687 17:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.687 17:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.687 17:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.687 17:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:53.687 17:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:53.687 17:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:53.687 17:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:53.687 17:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:53.687 17:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:53.687 17:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.687 17:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.687 BaseBdev2 00:09:53.687 17:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.687 17:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:53.687 17:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:53.687 17:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:09:53.687 17:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:53.687 17:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:53.687 17:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:53.687 17:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:53.687 17:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.687 17:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.687 17:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.687 17:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:53.687 17:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.687 17:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.687 [ 00:09:53.687 { 00:09:53.687 "name": "BaseBdev2", 00:09:53.687 "aliases": [ 00:09:53.687 "23360414-8dcd-40ef-a190-281c570cad31" 00:09:53.687 ], 00:09:53.687 "product_name": "Malloc disk", 00:09:53.687 "block_size": 512, 00:09:53.687 "num_blocks": 65536, 00:09:53.687 "uuid": "23360414-8dcd-40ef-a190-281c570cad31", 00:09:53.687 "assigned_rate_limits": { 00:09:53.687 "rw_ios_per_sec": 0, 00:09:53.687 "rw_mbytes_per_sec": 0, 00:09:53.687 "r_mbytes_per_sec": 0, 00:09:53.687 "w_mbytes_per_sec": 0 00:09:53.687 }, 00:09:53.687 "claimed": false, 00:09:53.687 "zoned": false, 00:09:53.687 "supported_io_types": { 00:09:53.687 "read": true, 00:09:53.687 "write": true, 00:09:53.687 "unmap": true, 00:09:53.687 "flush": true, 00:09:53.687 "reset": true, 00:09:53.687 "nvme_admin": false, 00:09:53.687 "nvme_io": false, 00:09:53.687 
"nvme_io_md": false, 00:09:53.687 "write_zeroes": true, 00:09:53.687 "zcopy": true, 00:09:53.687 "get_zone_info": false, 00:09:53.687 "zone_management": false, 00:09:53.687 "zone_append": false, 00:09:53.687 "compare": false, 00:09:53.687 "compare_and_write": false, 00:09:53.687 "abort": true, 00:09:53.687 "seek_hole": false, 00:09:53.687 "seek_data": false, 00:09:53.687 "copy": true, 00:09:53.687 "nvme_iov_md": false 00:09:53.687 }, 00:09:53.687 "memory_domains": [ 00:09:53.687 { 00:09:53.687 "dma_device_id": "system", 00:09:53.687 "dma_device_type": 1 00:09:53.687 }, 00:09:53.687 { 00:09:53.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.687 "dma_device_type": 2 00:09:53.687 } 00:09:53.687 ], 00:09:53.687 "driver_specific": {} 00:09:53.687 } 00:09:53.687 ] 00:09:53.687 17:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.687 17:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:53.687 17:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:53.687 17:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:53.687 17:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:53.687 17:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.687 17:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.687 BaseBdev3 00:09:53.687 17:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.687 17:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:53.687 17:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:53.687 17:26:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:53.688 17:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:53.688 17:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:53.688 17:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:53.688 17:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:53.688 17:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.688 17:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.688 17:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.688 17:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:53.688 17:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.688 17:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.688 [ 00:09:53.688 { 00:09:53.688 "name": "BaseBdev3", 00:09:53.688 "aliases": [ 00:09:53.688 "0a664234-7c45-445c-96aa-1d851453a9c1" 00:09:53.688 ], 00:09:53.688 "product_name": "Malloc disk", 00:09:53.688 "block_size": 512, 00:09:53.688 "num_blocks": 65536, 00:09:53.688 "uuid": "0a664234-7c45-445c-96aa-1d851453a9c1", 00:09:53.688 "assigned_rate_limits": { 00:09:53.688 "rw_ios_per_sec": 0, 00:09:53.688 "rw_mbytes_per_sec": 0, 00:09:53.688 "r_mbytes_per_sec": 0, 00:09:53.688 "w_mbytes_per_sec": 0 00:09:53.688 }, 00:09:53.688 "claimed": false, 00:09:53.688 "zoned": false, 00:09:53.688 "supported_io_types": { 00:09:53.688 "read": true, 00:09:53.688 "write": true, 00:09:53.688 "unmap": true, 00:09:53.688 "flush": true, 00:09:53.688 "reset": true, 00:09:53.688 "nvme_admin": false, 
00:09:53.688 "nvme_io": false, 00:09:53.688 "nvme_io_md": false, 00:09:53.688 "write_zeroes": true, 00:09:53.688 "zcopy": true, 00:09:53.688 "get_zone_info": false, 00:09:53.688 "zone_management": false, 00:09:53.688 "zone_append": false, 00:09:53.688 "compare": false, 00:09:53.688 "compare_and_write": false, 00:09:53.688 "abort": true, 00:09:53.688 "seek_hole": false, 00:09:53.688 "seek_data": false, 00:09:53.688 "copy": true, 00:09:53.688 "nvme_iov_md": false 00:09:53.688 }, 00:09:53.688 "memory_domains": [ 00:09:53.688 { 00:09:53.688 "dma_device_id": "system", 00:09:53.688 "dma_device_type": 1 00:09:53.688 }, 00:09:53.688 { 00:09:53.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.688 "dma_device_type": 2 00:09:53.688 } 00:09:53.688 ], 00:09:53.688 "driver_specific": {} 00:09:53.688 } 00:09:53.688 ] 00:09:53.688 17:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.688 17:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:53.688 17:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:53.688 17:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:53.688 17:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:53.688 17:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.688 17:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.688 [2024-12-07 17:26:27.032909] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:53.688 [2024-12-07 17:26:27.032980] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:53.688 [2024-12-07 17:26:27.032999] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:53.688 [2024-12-07 17:26:27.034969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:53.688 17:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.688 17:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:53.688 17:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:53.688 17:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:53.688 17:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:53.688 17:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:53.688 17:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:53.688 17:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.688 17:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.688 17:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.688 17:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.688 17:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.688 17:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.688 17:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.688 17:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.688 
17:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.947 17:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.947 "name": "Existed_Raid", 00:09:53.947 "uuid": "e0832701-0dbc-4562-a83b-d236872ea3b0", 00:09:53.947 "strip_size_kb": 0, 00:09:53.947 "state": "configuring", 00:09:53.947 "raid_level": "raid1", 00:09:53.947 "superblock": true, 00:09:53.948 "num_base_bdevs": 3, 00:09:53.948 "num_base_bdevs_discovered": 2, 00:09:53.948 "num_base_bdevs_operational": 3, 00:09:53.948 "base_bdevs_list": [ 00:09:53.948 { 00:09:53.948 "name": "BaseBdev1", 00:09:53.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.948 "is_configured": false, 00:09:53.948 "data_offset": 0, 00:09:53.948 "data_size": 0 00:09:53.948 }, 00:09:53.948 { 00:09:53.948 "name": "BaseBdev2", 00:09:53.948 "uuid": "23360414-8dcd-40ef-a190-281c570cad31", 00:09:53.948 "is_configured": true, 00:09:53.948 "data_offset": 2048, 00:09:53.948 "data_size": 63488 00:09:53.948 }, 00:09:53.948 { 00:09:53.948 "name": "BaseBdev3", 00:09:53.948 "uuid": "0a664234-7c45-445c-96aa-1d851453a9c1", 00:09:53.948 "is_configured": true, 00:09:53.948 "data_offset": 2048, 00:09:53.948 "data_size": 63488 00:09:53.948 } 00:09:53.948 ] 00:09:53.948 }' 00:09:53.948 17:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.948 17:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.206 17:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:54.207 17:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.207 17:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.207 [2024-12-07 17:26:27.464239] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:54.207 17:26:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.207 17:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:54.207 17:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.207 17:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:54.207 17:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:54.207 17:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:54.207 17:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:54.207 17:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.207 17:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.207 17:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.207 17:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.207 17:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.207 17:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.207 17:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.207 17:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.207 17:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.207 17:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.207 "name": 
"Existed_Raid", 00:09:54.207 "uuid": "e0832701-0dbc-4562-a83b-d236872ea3b0", 00:09:54.207 "strip_size_kb": 0, 00:09:54.207 "state": "configuring", 00:09:54.207 "raid_level": "raid1", 00:09:54.207 "superblock": true, 00:09:54.207 "num_base_bdevs": 3, 00:09:54.207 "num_base_bdevs_discovered": 1, 00:09:54.207 "num_base_bdevs_operational": 3, 00:09:54.207 "base_bdevs_list": [ 00:09:54.207 { 00:09:54.207 "name": "BaseBdev1", 00:09:54.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.207 "is_configured": false, 00:09:54.207 "data_offset": 0, 00:09:54.207 "data_size": 0 00:09:54.207 }, 00:09:54.207 { 00:09:54.207 "name": null, 00:09:54.207 "uuid": "23360414-8dcd-40ef-a190-281c570cad31", 00:09:54.207 "is_configured": false, 00:09:54.207 "data_offset": 0, 00:09:54.207 "data_size": 63488 00:09:54.207 }, 00:09:54.207 { 00:09:54.207 "name": "BaseBdev3", 00:09:54.207 "uuid": "0a664234-7c45-445c-96aa-1d851453a9c1", 00:09:54.207 "is_configured": true, 00:09:54.207 "data_offset": 2048, 00:09:54.207 "data_size": 63488 00:09:54.207 } 00:09:54.207 ] 00:09:54.207 }' 00:09:54.207 17:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.207 17:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.777 17:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.777 17:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:54.777 17:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.777 17:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.777 17:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.777 17:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:54.777 
17:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:54.777 17:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.777 17:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.777 [2024-12-07 17:26:28.001105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:54.777 BaseBdev1 00:09:54.777 17:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.777 17:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:54.777 17:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:54.777 17:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:54.777 17:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:54.777 17:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:54.777 17:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:54.777 17:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:54.777 17:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.777 17:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.777 17:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.777 17:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:54.777 17:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:54.777 17:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.777 [ 00:09:54.777 { 00:09:54.777 "name": "BaseBdev1", 00:09:54.777 "aliases": [ 00:09:54.777 "fd803916-ebd8-4af0-ae6a-48b4f6325bf1" 00:09:54.777 ], 00:09:54.777 "product_name": "Malloc disk", 00:09:54.777 "block_size": 512, 00:09:54.777 "num_blocks": 65536, 00:09:54.777 "uuid": "fd803916-ebd8-4af0-ae6a-48b4f6325bf1", 00:09:54.777 "assigned_rate_limits": { 00:09:54.777 "rw_ios_per_sec": 0, 00:09:54.777 "rw_mbytes_per_sec": 0, 00:09:54.777 "r_mbytes_per_sec": 0, 00:09:54.777 "w_mbytes_per_sec": 0 00:09:54.777 }, 00:09:54.777 "claimed": true, 00:09:54.777 "claim_type": "exclusive_write", 00:09:54.777 "zoned": false, 00:09:54.777 "supported_io_types": { 00:09:54.777 "read": true, 00:09:54.777 "write": true, 00:09:54.777 "unmap": true, 00:09:54.777 "flush": true, 00:09:54.777 "reset": true, 00:09:54.777 "nvme_admin": false, 00:09:54.777 "nvme_io": false, 00:09:54.777 "nvme_io_md": false, 00:09:54.777 "write_zeroes": true, 00:09:54.777 "zcopy": true, 00:09:54.777 "get_zone_info": false, 00:09:54.777 "zone_management": false, 00:09:54.777 "zone_append": false, 00:09:54.777 "compare": false, 00:09:54.777 "compare_and_write": false, 00:09:54.777 "abort": true, 00:09:54.777 "seek_hole": false, 00:09:54.777 "seek_data": false, 00:09:54.777 "copy": true, 00:09:54.777 "nvme_iov_md": false 00:09:54.777 }, 00:09:54.777 "memory_domains": [ 00:09:54.777 { 00:09:54.777 "dma_device_id": "system", 00:09:54.777 "dma_device_type": 1 00:09:54.777 }, 00:09:54.777 { 00:09:54.777 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.777 "dma_device_type": 2 00:09:54.777 } 00:09:54.777 ], 00:09:54.777 "driver_specific": {} 00:09:54.777 } 00:09:54.777 ] 00:09:54.777 17:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.777 17:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:54.777 
17:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:54.777 17:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.777 17:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:54.778 17:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:54.778 17:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:54.778 17:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:54.778 17:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.778 17:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.778 17:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.778 17:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.778 17:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.778 17:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.778 17:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.778 17:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.778 17:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.778 17:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.778 "name": "Existed_Raid", 00:09:54.778 "uuid": "e0832701-0dbc-4562-a83b-d236872ea3b0", 00:09:54.778 "strip_size_kb": 0, 
00:09:54.778 "state": "configuring", 00:09:54.778 "raid_level": "raid1", 00:09:54.778 "superblock": true, 00:09:54.778 "num_base_bdevs": 3, 00:09:54.778 "num_base_bdevs_discovered": 2, 00:09:54.778 "num_base_bdevs_operational": 3, 00:09:54.778 "base_bdevs_list": [ 00:09:54.778 { 00:09:54.778 "name": "BaseBdev1", 00:09:54.778 "uuid": "fd803916-ebd8-4af0-ae6a-48b4f6325bf1", 00:09:54.778 "is_configured": true, 00:09:54.778 "data_offset": 2048, 00:09:54.778 "data_size": 63488 00:09:54.778 }, 00:09:54.778 { 00:09:54.778 "name": null, 00:09:54.778 "uuid": "23360414-8dcd-40ef-a190-281c570cad31", 00:09:54.778 "is_configured": false, 00:09:54.778 "data_offset": 0, 00:09:54.778 "data_size": 63488 00:09:54.778 }, 00:09:54.778 { 00:09:54.778 "name": "BaseBdev3", 00:09:54.778 "uuid": "0a664234-7c45-445c-96aa-1d851453a9c1", 00:09:54.778 "is_configured": true, 00:09:54.778 "data_offset": 2048, 00:09:54.778 "data_size": 63488 00:09:54.778 } 00:09:54.778 ] 00:09:54.778 }' 00:09:54.778 17:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.778 17:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.346 17:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:55.346 17:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.346 17:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.346 17:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.346 17:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.346 17:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:55.346 17:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:09:55.346 17:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.346 17:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.346 [2024-12-07 17:26:28.500290] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:55.346 17:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.346 17:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:55.346 17:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.346 17:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:55.346 17:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:55.346 17:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:55.346 17:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:55.346 17:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.346 17:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.346 17:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.346 17:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.346 17:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.346 17:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.346 17:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.346 17:26:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.346 17:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.346 17:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.346 "name": "Existed_Raid", 00:09:55.346 "uuid": "e0832701-0dbc-4562-a83b-d236872ea3b0", 00:09:55.346 "strip_size_kb": 0, 00:09:55.346 "state": "configuring", 00:09:55.346 "raid_level": "raid1", 00:09:55.346 "superblock": true, 00:09:55.346 "num_base_bdevs": 3, 00:09:55.346 "num_base_bdevs_discovered": 1, 00:09:55.346 "num_base_bdevs_operational": 3, 00:09:55.346 "base_bdevs_list": [ 00:09:55.346 { 00:09:55.346 "name": "BaseBdev1", 00:09:55.346 "uuid": "fd803916-ebd8-4af0-ae6a-48b4f6325bf1", 00:09:55.346 "is_configured": true, 00:09:55.346 "data_offset": 2048, 00:09:55.346 "data_size": 63488 00:09:55.346 }, 00:09:55.346 { 00:09:55.346 "name": null, 00:09:55.346 "uuid": "23360414-8dcd-40ef-a190-281c570cad31", 00:09:55.346 "is_configured": false, 00:09:55.346 "data_offset": 0, 00:09:55.346 "data_size": 63488 00:09:55.346 }, 00:09:55.346 { 00:09:55.346 "name": null, 00:09:55.346 "uuid": "0a664234-7c45-445c-96aa-1d851453a9c1", 00:09:55.346 "is_configured": false, 00:09:55.346 "data_offset": 0, 00:09:55.346 "data_size": 63488 00:09:55.346 } 00:09:55.346 ] 00:09:55.346 }' 00:09:55.346 17:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.346 17:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.606 17:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:55.606 17:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.606 17:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:55.606 17:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.606 17:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.606 17:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:55.606 17:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:55.606 17:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.606 17:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.606 [2024-12-07 17:26:28.959553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:55.606 17:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.606 17:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:55.606 17:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.606 17:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:55.606 17:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:55.606 17:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:55.606 17:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:55.606 17:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.606 17:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.606 17:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:09:55.606 17:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.606 17:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.606 17:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.606 17:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.606 17:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.606 17:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.865 17:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.865 "name": "Existed_Raid", 00:09:55.865 "uuid": "e0832701-0dbc-4562-a83b-d236872ea3b0", 00:09:55.865 "strip_size_kb": 0, 00:09:55.865 "state": "configuring", 00:09:55.865 "raid_level": "raid1", 00:09:55.865 "superblock": true, 00:09:55.865 "num_base_bdevs": 3, 00:09:55.865 "num_base_bdevs_discovered": 2, 00:09:55.865 "num_base_bdevs_operational": 3, 00:09:55.865 "base_bdevs_list": [ 00:09:55.865 { 00:09:55.865 "name": "BaseBdev1", 00:09:55.865 "uuid": "fd803916-ebd8-4af0-ae6a-48b4f6325bf1", 00:09:55.865 "is_configured": true, 00:09:55.865 "data_offset": 2048, 00:09:55.865 "data_size": 63488 00:09:55.865 }, 00:09:55.865 { 00:09:55.865 "name": null, 00:09:55.865 "uuid": "23360414-8dcd-40ef-a190-281c570cad31", 00:09:55.865 "is_configured": false, 00:09:55.865 "data_offset": 0, 00:09:55.865 "data_size": 63488 00:09:55.865 }, 00:09:55.865 { 00:09:55.865 "name": "BaseBdev3", 00:09:55.865 "uuid": "0a664234-7c45-445c-96aa-1d851453a9c1", 00:09:55.865 "is_configured": true, 00:09:55.865 "data_offset": 2048, 00:09:55.865 "data_size": 63488 00:09:55.865 } 00:09:55.865 ] 00:09:55.865 }' 00:09:55.865 17:26:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.865 17:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.125 17:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.125 17:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:56.125 17:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.125 17:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.125 17:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.125 17:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:56.125 17:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:56.125 17:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.125 17:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.125 [2024-12-07 17:26:29.462730] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:56.385 17:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.385 17:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:56.385 17:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:56.385 17:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:56.385 17:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:56.385 17:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:09:56.385 17:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:56.385 17:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.385 17:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.385 17:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.385 17:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.385 17:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.385 17:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.385 17:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.385 17:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.385 17:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.385 17:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.385 "name": "Existed_Raid", 00:09:56.385 "uuid": "e0832701-0dbc-4562-a83b-d236872ea3b0", 00:09:56.385 "strip_size_kb": 0, 00:09:56.385 "state": "configuring", 00:09:56.385 "raid_level": "raid1", 00:09:56.385 "superblock": true, 00:09:56.385 "num_base_bdevs": 3, 00:09:56.385 "num_base_bdevs_discovered": 1, 00:09:56.385 "num_base_bdevs_operational": 3, 00:09:56.385 "base_bdevs_list": [ 00:09:56.385 { 00:09:56.385 "name": null, 00:09:56.385 "uuid": "fd803916-ebd8-4af0-ae6a-48b4f6325bf1", 00:09:56.385 "is_configured": false, 00:09:56.385 "data_offset": 0, 00:09:56.385 "data_size": 63488 00:09:56.385 }, 00:09:56.385 { 00:09:56.385 "name": null, 00:09:56.385 "uuid": 
"23360414-8dcd-40ef-a190-281c570cad31", 00:09:56.385 "is_configured": false, 00:09:56.385 "data_offset": 0, 00:09:56.385 "data_size": 63488 00:09:56.385 }, 00:09:56.385 { 00:09:56.385 "name": "BaseBdev3", 00:09:56.385 "uuid": "0a664234-7c45-445c-96aa-1d851453a9c1", 00:09:56.385 "is_configured": true, 00:09:56.385 "data_offset": 2048, 00:09:56.385 "data_size": 63488 00:09:56.385 } 00:09:56.385 ] 00:09:56.385 }' 00:09:56.385 17:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.385 17:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.644 17:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.644 17:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:56.644 17:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.644 17:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.644 17:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.903 17:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:56.903 17:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:56.903 17:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.903 17:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.904 [2024-12-07 17:26:30.035891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:56.904 17:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.904 17:26:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:56.904 17:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:56.904 17:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:56.904 17:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:56.904 17:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:56.904 17:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:56.904 17:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.904 17:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.904 17:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.904 17:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.904 17:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.904 17:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.904 17:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.904 17:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.904 17:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.904 17:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.904 "name": "Existed_Raid", 00:09:56.904 "uuid": "e0832701-0dbc-4562-a83b-d236872ea3b0", 00:09:56.904 "strip_size_kb": 0, 00:09:56.904 "state": "configuring", 00:09:56.904 
"raid_level": "raid1", 00:09:56.904 "superblock": true, 00:09:56.904 "num_base_bdevs": 3, 00:09:56.904 "num_base_bdevs_discovered": 2, 00:09:56.904 "num_base_bdevs_operational": 3, 00:09:56.904 "base_bdevs_list": [ 00:09:56.904 { 00:09:56.904 "name": null, 00:09:56.904 "uuid": "fd803916-ebd8-4af0-ae6a-48b4f6325bf1", 00:09:56.904 "is_configured": false, 00:09:56.904 "data_offset": 0, 00:09:56.904 "data_size": 63488 00:09:56.904 }, 00:09:56.904 { 00:09:56.904 "name": "BaseBdev2", 00:09:56.904 "uuid": "23360414-8dcd-40ef-a190-281c570cad31", 00:09:56.904 "is_configured": true, 00:09:56.904 "data_offset": 2048, 00:09:56.904 "data_size": 63488 00:09:56.904 }, 00:09:56.904 { 00:09:56.904 "name": "BaseBdev3", 00:09:56.904 "uuid": "0a664234-7c45-445c-96aa-1d851453a9c1", 00:09:56.904 "is_configured": true, 00:09:56.904 "data_offset": 2048, 00:09:56.904 "data_size": 63488 00:09:56.904 } 00:09:56.904 ] 00:09:56.904 }' 00:09:56.904 17:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.904 17:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.163 17:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.163 17:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:57.163 17:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.163 17:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.163 17:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.163 17:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:57.163 17:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.163 17:26:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:57.163 17:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.163 17:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.163 17:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.423 17:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u fd803916-ebd8-4af0-ae6a-48b4f6325bf1 00:09:57.423 17:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.423 17:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.423 [2024-12-07 17:26:30.599796] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:57.423 [2024-12-07 17:26:30.600078] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:57.423 [2024-12-07 17:26:30.600111] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:57.423 [2024-12-07 17:26:30.600380] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:57.423 NewBaseBdev 00:09:57.423 [2024-12-07 17:26:30.600535] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:57.423 [2024-12-07 17:26:30.600548] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:57.423 [2024-12-07 17:26:30.600677] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:57.423 17:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.423 17:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:57.423 
17:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:57.423 17:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:57.423 17:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:57.423 17:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:57.423 17:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:57.423 17:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:57.424 17:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.424 17:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.424 17:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.424 17:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:57.424 17:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.424 17:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.424 [ 00:09:57.424 { 00:09:57.424 "name": "NewBaseBdev", 00:09:57.424 "aliases": [ 00:09:57.424 "fd803916-ebd8-4af0-ae6a-48b4f6325bf1" 00:09:57.424 ], 00:09:57.424 "product_name": "Malloc disk", 00:09:57.424 "block_size": 512, 00:09:57.424 "num_blocks": 65536, 00:09:57.424 "uuid": "fd803916-ebd8-4af0-ae6a-48b4f6325bf1", 00:09:57.424 "assigned_rate_limits": { 00:09:57.424 "rw_ios_per_sec": 0, 00:09:57.424 "rw_mbytes_per_sec": 0, 00:09:57.424 "r_mbytes_per_sec": 0, 00:09:57.424 "w_mbytes_per_sec": 0 00:09:57.424 }, 00:09:57.424 "claimed": true, 00:09:57.424 "claim_type": "exclusive_write", 00:09:57.424 
"zoned": false, 00:09:57.424 "supported_io_types": { 00:09:57.424 "read": true, 00:09:57.424 "write": true, 00:09:57.424 "unmap": true, 00:09:57.424 "flush": true, 00:09:57.424 "reset": true, 00:09:57.424 "nvme_admin": false, 00:09:57.424 "nvme_io": false, 00:09:57.424 "nvme_io_md": false, 00:09:57.424 "write_zeroes": true, 00:09:57.424 "zcopy": true, 00:09:57.424 "get_zone_info": false, 00:09:57.424 "zone_management": false, 00:09:57.424 "zone_append": false, 00:09:57.424 "compare": false, 00:09:57.424 "compare_and_write": false, 00:09:57.424 "abort": true, 00:09:57.424 "seek_hole": false, 00:09:57.424 "seek_data": false, 00:09:57.424 "copy": true, 00:09:57.424 "nvme_iov_md": false 00:09:57.424 }, 00:09:57.424 "memory_domains": [ 00:09:57.424 { 00:09:57.424 "dma_device_id": "system", 00:09:57.424 "dma_device_type": 1 00:09:57.424 }, 00:09:57.424 { 00:09:57.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.424 "dma_device_type": 2 00:09:57.424 } 00:09:57.424 ], 00:09:57.424 "driver_specific": {} 00:09:57.424 } 00:09:57.424 ] 00:09:57.424 17:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.424 17:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:57.424 17:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:57.424 17:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.424 17:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:57.424 17:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:57.424 17:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:57.424 17:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:09:57.424 17:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.424 17:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.424 17:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.424 17:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.424 17:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.424 17:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.424 17:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.424 17:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.424 17:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.424 17:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.424 "name": "Existed_Raid", 00:09:57.424 "uuid": "e0832701-0dbc-4562-a83b-d236872ea3b0", 00:09:57.424 "strip_size_kb": 0, 00:09:57.424 "state": "online", 00:09:57.424 "raid_level": "raid1", 00:09:57.424 "superblock": true, 00:09:57.424 "num_base_bdevs": 3, 00:09:57.424 "num_base_bdevs_discovered": 3, 00:09:57.424 "num_base_bdevs_operational": 3, 00:09:57.424 "base_bdevs_list": [ 00:09:57.424 { 00:09:57.424 "name": "NewBaseBdev", 00:09:57.424 "uuid": "fd803916-ebd8-4af0-ae6a-48b4f6325bf1", 00:09:57.424 "is_configured": true, 00:09:57.424 "data_offset": 2048, 00:09:57.424 "data_size": 63488 00:09:57.424 }, 00:09:57.424 { 00:09:57.424 "name": "BaseBdev2", 00:09:57.424 "uuid": "23360414-8dcd-40ef-a190-281c570cad31", 00:09:57.424 "is_configured": true, 00:09:57.424 "data_offset": 2048, 00:09:57.424 "data_size": 63488 00:09:57.424 }, 00:09:57.424 
{ 00:09:57.424 "name": "BaseBdev3", 00:09:57.424 "uuid": "0a664234-7c45-445c-96aa-1d851453a9c1", 00:09:57.424 "is_configured": true, 00:09:57.424 "data_offset": 2048, 00:09:57.424 "data_size": 63488 00:09:57.424 } 00:09:57.424 ] 00:09:57.424 }' 00:09:57.424 17:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.424 17:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.991 17:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:57.991 17:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:57.991 17:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:57.991 17:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:57.991 17:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:57.991 17:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:57.991 17:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:57.991 17:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:57.991 17:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.991 17:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.991 [2024-12-07 17:26:31.083406] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:57.991 17:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.991 17:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:57.991 "name": "Existed_Raid", 00:09:57.991 
"aliases": [ 00:09:57.991 "e0832701-0dbc-4562-a83b-d236872ea3b0" 00:09:57.991 ], 00:09:57.991 "product_name": "Raid Volume", 00:09:57.991 "block_size": 512, 00:09:57.991 "num_blocks": 63488, 00:09:57.991 "uuid": "e0832701-0dbc-4562-a83b-d236872ea3b0", 00:09:57.991 "assigned_rate_limits": { 00:09:57.991 "rw_ios_per_sec": 0, 00:09:57.991 "rw_mbytes_per_sec": 0, 00:09:57.991 "r_mbytes_per_sec": 0, 00:09:57.991 "w_mbytes_per_sec": 0 00:09:57.991 }, 00:09:57.991 "claimed": false, 00:09:57.991 "zoned": false, 00:09:57.991 "supported_io_types": { 00:09:57.991 "read": true, 00:09:57.991 "write": true, 00:09:57.991 "unmap": false, 00:09:57.991 "flush": false, 00:09:57.991 "reset": true, 00:09:57.991 "nvme_admin": false, 00:09:57.991 "nvme_io": false, 00:09:57.991 "nvme_io_md": false, 00:09:57.991 "write_zeroes": true, 00:09:57.991 "zcopy": false, 00:09:57.991 "get_zone_info": false, 00:09:57.991 "zone_management": false, 00:09:57.991 "zone_append": false, 00:09:57.991 "compare": false, 00:09:57.991 "compare_and_write": false, 00:09:57.991 "abort": false, 00:09:57.991 "seek_hole": false, 00:09:57.991 "seek_data": false, 00:09:57.991 "copy": false, 00:09:57.991 "nvme_iov_md": false 00:09:57.991 }, 00:09:57.991 "memory_domains": [ 00:09:57.991 { 00:09:57.991 "dma_device_id": "system", 00:09:57.991 "dma_device_type": 1 00:09:57.991 }, 00:09:57.991 { 00:09:57.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.991 "dma_device_type": 2 00:09:57.991 }, 00:09:57.991 { 00:09:57.991 "dma_device_id": "system", 00:09:57.991 "dma_device_type": 1 00:09:57.991 }, 00:09:57.991 { 00:09:57.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.991 "dma_device_type": 2 00:09:57.991 }, 00:09:57.991 { 00:09:57.991 "dma_device_id": "system", 00:09:57.991 "dma_device_type": 1 00:09:57.991 }, 00:09:57.991 { 00:09:57.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.991 "dma_device_type": 2 00:09:57.991 } 00:09:57.991 ], 00:09:57.991 "driver_specific": { 00:09:57.991 "raid": { 00:09:57.991 
"uuid": "e0832701-0dbc-4562-a83b-d236872ea3b0", 00:09:57.991 "strip_size_kb": 0, 00:09:57.991 "state": "online", 00:09:57.991 "raid_level": "raid1", 00:09:57.991 "superblock": true, 00:09:57.991 "num_base_bdevs": 3, 00:09:57.991 "num_base_bdevs_discovered": 3, 00:09:57.991 "num_base_bdevs_operational": 3, 00:09:57.991 "base_bdevs_list": [ 00:09:57.991 { 00:09:57.991 "name": "NewBaseBdev", 00:09:57.991 "uuid": "fd803916-ebd8-4af0-ae6a-48b4f6325bf1", 00:09:57.991 "is_configured": true, 00:09:57.991 "data_offset": 2048, 00:09:57.991 "data_size": 63488 00:09:57.991 }, 00:09:57.991 { 00:09:57.991 "name": "BaseBdev2", 00:09:57.991 "uuid": "23360414-8dcd-40ef-a190-281c570cad31", 00:09:57.991 "is_configured": true, 00:09:57.991 "data_offset": 2048, 00:09:57.991 "data_size": 63488 00:09:57.991 }, 00:09:57.991 { 00:09:57.991 "name": "BaseBdev3", 00:09:57.991 "uuid": "0a664234-7c45-445c-96aa-1d851453a9c1", 00:09:57.991 "is_configured": true, 00:09:57.991 "data_offset": 2048, 00:09:57.991 "data_size": 63488 00:09:57.991 } 00:09:57.991 ] 00:09:57.991 } 00:09:57.991 } 00:09:57.991 }' 00:09:57.991 17:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:57.991 17:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:57.991 BaseBdev2 00:09:57.991 BaseBdev3' 00:09:57.991 17:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.991 17:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:57.991 17:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:57.991 17:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.991 
17:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:57.991 17:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.991 17:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.991 17:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.991 17:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:57.991 17:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:57.991 17:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:57.991 17:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:57.991 17:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.991 17:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.991 17:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.991 17:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.991 17:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:57.991 17:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:57.991 17:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:57.991 17:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:57.991 17:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.991 17:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.991 17:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.991 17:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.991 17:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:57.991 17:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:57.991 17:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:57.991 17:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.991 17:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.991 [2024-12-07 17:26:31.342656] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:57.991 [2024-12-07 17:26:31.342710] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:57.991 [2024-12-07 17:26:31.342813] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:57.991 [2024-12-07 17:26:31.343167] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:57.991 [2024-12-07 17:26:31.343181] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:57.991 17:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.991 17:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68037 00:09:57.991 17:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 68037 ']' 00:09:57.991 17:26:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 68037 00:09:57.991 17:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:57.991 17:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:57.991 17:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68037 00:09:58.250 17:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:58.250 17:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:58.250 17:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68037' 00:09:58.250 killing process with pid 68037 00:09:58.250 17:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 68037 00:09:58.250 17:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 68037 00:09:58.250 [2024-12-07 17:26:31.387468] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:58.509 [2024-12-07 17:26:31.719873] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:59.887 17:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:59.887 00:09:59.887 real 0m10.537s 00:09:59.887 user 0m16.494s 00:09:59.887 sys 0m1.914s 00:09:59.887 ************************************ 00:09:59.887 END TEST raid_state_function_test_sb 00:09:59.887 ************************************ 00:09:59.887 17:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:59.888 17:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.888 17:26:32 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:09:59.888 17:26:32 
bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:59.888 17:26:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:59.888 17:26:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:59.888 ************************************ 00:09:59.888 START TEST raid_superblock_test 00:09:59.888 ************************************ 00:09:59.888 17:26:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:09:59.888 17:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:09:59.888 17:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:59.888 17:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:59.888 17:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:59.888 17:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:59.888 17:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:59.888 17:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:59.888 17:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:59.888 17:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:59.888 17:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:59.888 17:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:59.888 17:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:59.888 17:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:59.888 17:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:09:59.888 17:26:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:09:59.888 17:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68652 00:09:59.888 17:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:59.888 17:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68652 00:09:59.888 17:26:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68652 ']' 00:09:59.888 17:26:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.888 17:26:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:59.888 17:26:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:59.888 17:26:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:59.888 17:26:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.888 [2024-12-07 17:26:33.096700] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:09:59.888 [2024-12-07 17:26:33.096894] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68652 ] 00:09:59.888 [2024-12-07 17:26:33.255621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.150 [2024-12-07 17:26:33.395544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.409 [2024-12-07 17:26:33.634854] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:00.409 [2024-12-07 17:26:33.635010] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:00.669 17:26:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:00.669 17:26:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:00.669 17:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:00.669 17:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:00.669 17:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:00.669 17:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:00.669 17:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:00.669 17:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:00.669 17:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:00.669 17:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:00.669 17:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:00.669 
17:26:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.669 17:26:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.669 malloc1 00:10:00.669 17:26:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.669 17:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:00.669 17:26:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.669 17:26:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.669 [2024-12-07 17:26:33.986927] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:00.669 [2024-12-07 17:26:33.987097] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:00.669 [2024-12-07 17:26:33.987139] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:00.669 [2024-12-07 17:26:33.987169] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:00.669 [2024-12-07 17:26:33.989521] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:00.669 [2024-12-07 17:26:33.989600] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:00.669 pt1 00:10:00.669 17:26:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.669 17:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:00.669 17:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:00.669 17:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:00.669 17:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:00.669 17:26:33 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:00.669 17:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:00.669 17:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:00.669 17:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:00.669 17:26:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:00.669 17:26:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.669 17:26:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.669 malloc2 00:10:00.669 17:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.669 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:00.669 17:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.669 17:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.929 [2024-12-07 17:26:34.054158] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:00.929 [2024-12-07 17:26:34.054221] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:00.929 [2024-12-07 17:26:34.054248] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:00.929 [2024-12-07 17:26:34.054257] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:00.929 [2024-12-07 17:26:34.056648] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:00.929 [2024-12-07 17:26:34.056684] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:00.929 
pt2 00:10:00.929 17:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.929 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:00.929 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:00.929 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:00.929 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:00.929 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:00.929 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:00.929 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:00.929 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:00.929 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:00.929 17:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.929 17:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.929 malloc3 00:10:00.929 17:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.929 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:00.929 17:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.929 17:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.929 [2024-12-07 17:26:34.127116] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:00.929 [2024-12-07 17:26:34.127255] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:00.929 [2024-12-07 17:26:34.127296] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:00.929 [2024-12-07 17:26:34.127330] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:00.929 [2024-12-07 17:26:34.129604] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:00.929 [2024-12-07 17:26:34.129683] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:00.929 pt3 00:10:00.929 17:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.929 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:00.929 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:00.929 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:10:00.929 17:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.929 17:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.929 [2024-12-07 17:26:34.139119] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:00.929 [2024-12-07 17:26:34.141081] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:00.929 [2024-12-07 17:26:34.141181] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:00.929 [2024-12-07 17:26:34.141357] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:00.929 [2024-12-07 17:26:34.141406] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:00.929 [2024-12-07 17:26:34.141645] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:00.929 
[2024-12-07 17:26:34.141855] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:00.929 [2024-12-07 17:26:34.141898] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:00.929 [2024-12-07 17:26:34.142107] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:00.929 17:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.929 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:00.929 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:00.929 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:00.929 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:00.929 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:00.929 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:00.929 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.929 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.929 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.929 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.929 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.929 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:00.929 17:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.929 17:26:34 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:00.929 17:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.929 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.929 "name": "raid_bdev1", 00:10:00.929 "uuid": "3908ced4-f088-4f65-85b5-f351102febd6", 00:10:00.929 "strip_size_kb": 0, 00:10:00.929 "state": "online", 00:10:00.929 "raid_level": "raid1", 00:10:00.929 "superblock": true, 00:10:00.929 "num_base_bdevs": 3, 00:10:00.929 "num_base_bdevs_discovered": 3, 00:10:00.929 "num_base_bdevs_operational": 3, 00:10:00.929 "base_bdevs_list": [ 00:10:00.929 { 00:10:00.929 "name": "pt1", 00:10:00.929 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:00.929 "is_configured": true, 00:10:00.929 "data_offset": 2048, 00:10:00.929 "data_size": 63488 00:10:00.929 }, 00:10:00.929 { 00:10:00.929 "name": "pt2", 00:10:00.929 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:00.929 "is_configured": true, 00:10:00.929 "data_offset": 2048, 00:10:00.929 "data_size": 63488 00:10:00.929 }, 00:10:00.929 { 00:10:00.929 "name": "pt3", 00:10:00.929 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:00.929 "is_configured": true, 00:10:00.929 "data_offset": 2048, 00:10:00.929 "data_size": 63488 00:10:00.929 } 00:10:00.929 ] 00:10:00.929 }' 00:10:00.929 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.929 17:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.499 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:01.499 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:01.499 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:01.499 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:01.499 17:26:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:01.499 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:01.499 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:01.499 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:01.499 17:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.499 17:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.499 [2024-12-07 17:26:34.610792] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:01.499 17:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.499 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:01.499 "name": "raid_bdev1", 00:10:01.499 "aliases": [ 00:10:01.499 "3908ced4-f088-4f65-85b5-f351102febd6" 00:10:01.499 ], 00:10:01.499 "product_name": "Raid Volume", 00:10:01.499 "block_size": 512, 00:10:01.499 "num_blocks": 63488, 00:10:01.499 "uuid": "3908ced4-f088-4f65-85b5-f351102febd6", 00:10:01.499 "assigned_rate_limits": { 00:10:01.499 "rw_ios_per_sec": 0, 00:10:01.499 "rw_mbytes_per_sec": 0, 00:10:01.499 "r_mbytes_per_sec": 0, 00:10:01.499 "w_mbytes_per_sec": 0 00:10:01.499 }, 00:10:01.499 "claimed": false, 00:10:01.499 "zoned": false, 00:10:01.499 "supported_io_types": { 00:10:01.499 "read": true, 00:10:01.499 "write": true, 00:10:01.499 "unmap": false, 00:10:01.499 "flush": false, 00:10:01.499 "reset": true, 00:10:01.499 "nvme_admin": false, 00:10:01.499 "nvme_io": false, 00:10:01.499 "nvme_io_md": false, 00:10:01.499 "write_zeroes": true, 00:10:01.499 "zcopy": false, 00:10:01.499 "get_zone_info": false, 00:10:01.499 "zone_management": false, 00:10:01.499 "zone_append": false, 00:10:01.499 "compare": false, 00:10:01.499 
"compare_and_write": false, 00:10:01.499 "abort": false, 00:10:01.499 "seek_hole": false, 00:10:01.499 "seek_data": false, 00:10:01.499 "copy": false, 00:10:01.499 "nvme_iov_md": false 00:10:01.499 }, 00:10:01.499 "memory_domains": [ 00:10:01.499 { 00:10:01.499 "dma_device_id": "system", 00:10:01.499 "dma_device_type": 1 00:10:01.499 }, 00:10:01.499 { 00:10:01.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.499 "dma_device_type": 2 00:10:01.499 }, 00:10:01.499 { 00:10:01.499 "dma_device_id": "system", 00:10:01.499 "dma_device_type": 1 00:10:01.499 }, 00:10:01.499 { 00:10:01.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.499 "dma_device_type": 2 00:10:01.499 }, 00:10:01.499 { 00:10:01.499 "dma_device_id": "system", 00:10:01.499 "dma_device_type": 1 00:10:01.499 }, 00:10:01.499 { 00:10:01.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.499 "dma_device_type": 2 00:10:01.499 } 00:10:01.499 ], 00:10:01.499 "driver_specific": { 00:10:01.499 "raid": { 00:10:01.499 "uuid": "3908ced4-f088-4f65-85b5-f351102febd6", 00:10:01.499 "strip_size_kb": 0, 00:10:01.499 "state": "online", 00:10:01.499 "raid_level": "raid1", 00:10:01.499 "superblock": true, 00:10:01.499 "num_base_bdevs": 3, 00:10:01.499 "num_base_bdevs_discovered": 3, 00:10:01.499 "num_base_bdevs_operational": 3, 00:10:01.499 "base_bdevs_list": [ 00:10:01.499 { 00:10:01.499 "name": "pt1", 00:10:01.499 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:01.499 "is_configured": true, 00:10:01.499 "data_offset": 2048, 00:10:01.499 "data_size": 63488 00:10:01.499 }, 00:10:01.499 { 00:10:01.499 "name": "pt2", 00:10:01.499 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:01.499 "is_configured": true, 00:10:01.499 "data_offset": 2048, 00:10:01.499 "data_size": 63488 00:10:01.499 }, 00:10:01.499 { 00:10:01.499 "name": "pt3", 00:10:01.500 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:01.500 "is_configured": true, 00:10:01.500 "data_offset": 2048, 00:10:01.500 "data_size": 63488 00:10:01.500 } 
00:10:01.500 ] 00:10:01.500 } 00:10:01.500 } 00:10:01.500 }' 00:10:01.500 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:01.500 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:01.500 pt2 00:10:01.500 pt3' 00:10:01.500 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.500 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:01.500 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.500 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:01.500 17:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.500 17:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.500 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.500 17:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.500 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.500 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.500 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.500 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:01.500 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.500 17:26:34 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.500 17:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.500 17:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.500 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.500 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.500 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.500 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.500 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:01.500 17:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.500 17:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.500 17:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.500 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.500 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.500 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:01.500 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:01.500 17:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.500 17:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.500 [2024-12-07 17:26:34.878200] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:01.759 17:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:01.759 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=3908ced4-f088-4f65-85b5-f351102febd6 00:10:01.759 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 3908ced4-f088-4f65-85b5-f351102febd6 ']' 00:10:01.759 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:01.759 17:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.759 17:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.759 [2024-12-07 17:26:34.925872] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:01.759 [2024-12-07 17:26:34.925917] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:01.759 [2024-12-07 17:26:34.926028] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:01.759 [2024-12-07 17:26:34.926115] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:01.759 [2024-12-07 17:26:34.926127] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:01.759 17:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.759 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.759 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:01.759 17:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.759 17:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.759 17:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.759 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:10:01.759 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:01.759 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:01.759 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:01.759 17:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.759 17:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.759 17:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.759 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:01.759 17:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:01.759 17:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.759 17:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.759 17:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.759 17:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:01.759 17:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:01.759 17:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.759 17:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.759 17:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.759 17:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:01.759 17:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:01.760 17:26:35 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.760 17:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.760 17:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.760 17:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:01.760 17:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:01.760 17:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:01.760 17:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:01.760 17:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:01.760 17:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:01.760 17:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:01.760 17:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:01.760 17:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:01.760 17:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.760 17:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.760 [2024-12-07 17:26:35.073631] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:01.760 [2024-12-07 17:26:35.075739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:01.760 [2024-12-07 17:26:35.075890] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:01.760 [2024-12-07 17:26:35.075971] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:01.760 [2024-12-07 17:26:35.076024] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:01.760 [2024-12-07 17:26:35.076043] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:01.760 [2024-12-07 17:26:35.076059] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:01.760 [2024-12-07 17:26:35.076069] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:01.760 request: 00:10:01.760 { 00:10:01.760 "name": "raid_bdev1", 00:10:01.760 "raid_level": "raid1", 00:10:01.760 "base_bdevs": [ 00:10:01.760 "malloc1", 00:10:01.760 "malloc2", 00:10:01.760 "malloc3" 00:10:01.760 ], 00:10:01.760 "superblock": false, 00:10:01.760 "method": "bdev_raid_create", 00:10:01.760 "req_id": 1 00:10:01.760 } 00:10:01.760 Got JSON-RPC error response 00:10:01.760 response: 00:10:01.760 { 00:10:01.760 "code": -17, 00:10:01.760 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:01.760 } 00:10:01.760 17:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:01.760 17:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:01.760 17:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:01.760 17:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:01.760 17:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:01.760 17:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:10:01.760 17:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:01.760 17:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.760 17:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.760 17:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.760 17:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:01.760 17:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:01.760 17:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:01.760 17:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.760 17:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.760 [2024-12-07 17:26:35.137554] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:01.760 [2024-12-07 17:26:35.137737] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:01.760 [2024-12-07 17:26:35.137776] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:01.760 [2024-12-07 17:26:35.137811] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:02.019 [2024-12-07 17:26:35.140391] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:02.019 [2024-12-07 17:26:35.140476] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:02.019 [2024-12-07 17:26:35.140610] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:02.019 [2024-12-07 17:26:35.140686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:02.019 pt1 00:10:02.019 
17:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.019 17:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:02.019 17:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:02.019 17:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.019 17:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:02.019 17:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:02.019 17:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:02.019 17:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.019 17:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.019 17:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.019 17:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.019 17:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.019 17:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.019 17:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.019 17:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:02.019 17:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.019 17:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.019 "name": "raid_bdev1", 00:10:02.019 "uuid": "3908ced4-f088-4f65-85b5-f351102febd6", 00:10:02.019 "strip_size_kb": 0, 00:10:02.019 
"state": "configuring", 00:10:02.019 "raid_level": "raid1", 00:10:02.019 "superblock": true, 00:10:02.019 "num_base_bdevs": 3, 00:10:02.019 "num_base_bdevs_discovered": 1, 00:10:02.019 "num_base_bdevs_operational": 3, 00:10:02.019 "base_bdevs_list": [ 00:10:02.019 { 00:10:02.019 "name": "pt1", 00:10:02.019 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:02.019 "is_configured": true, 00:10:02.019 "data_offset": 2048, 00:10:02.019 "data_size": 63488 00:10:02.019 }, 00:10:02.019 { 00:10:02.019 "name": null, 00:10:02.019 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:02.019 "is_configured": false, 00:10:02.019 "data_offset": 2048, 00:10:02.019 "data_size": 63488 00:10:02.019 }, 00:10:02.019 { 00:10:02.019 "name": null, 00:10:02.019 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:02.019 "is_configured": false, 00:10:02.019 "data_offset": 2048, 00:10:02.019 "data_size": 63488 00:10:02.019 } 00:10:02.019 ] 00:10:02.019 }' 00:10:02.019 17:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.019 17:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.278 17:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:10:02.278 17:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:02.278 17:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.278 17:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.278 [2024-12-07 17:26:35.620719] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:02.278 [2024-12-07 17:26:35.620808] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:02.278 [2024-12-07 17:26:35.620835] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:10:02.278 
[2024-12-07 17:26:35.620845] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:02.278 [2024-12-07 17:26:35.621388] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:02.278 [2024-12-07 17:26:35.621419] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:02.278 [2024-12-07 17:26:35.621522] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:02.278 [2024-12-07 17:26:35.621552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:02.278 pt2 00:10:02.278 17:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.278 17:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:02.278 17:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.278 17:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.278 [2024-12-07 17:26:35.632660] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:02.278 17:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.278 17:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:02.278 17:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:02.278 17:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.278 17:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:02.278 17:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:02.278 17:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:02.278 17:26:35 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.278 17:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.278 17:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.278 17:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.278 17:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.278 17:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:02.278 17:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.278 17:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.536 17:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.536 17:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.536 "name": "raid_bdev1", 00:10:02.536 "uuid": "3908ced4-f088-4f65-85b5-f351102febd6", 00:10:02.536 "strip_size_kb": 0, 00:10:02.536 "state": "configuring", 00:10:02.536 "raid_level": "raid1", 00:10:02.536 "superblock": true, 00:10:02.536 "num_base_bdevs": 3, 00:10:02.536 "num_base_bdevs_discovered": 1, 00:10:02.536 "num_base_bdevs_operational": 3, 00:10:02.536 "base_bdevs_list": [ 00:10:02.536 { 00:10:02.536 "name": "pt1", 00:10:02.536 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:02.536 "is_configured": true, 00:10:02.536 "data_offset": 2048, 00:10:02.536 "data_size": 63488 00:10:02.536 }, 00:10:02.536 { 00:10:02.536 "name": null, 00:10:02.536 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:02.536 "is_configured": false, 00:10:02.536 "data_offset": 0, 00:10:02.536 "data_size": 63488 00:10:02.536 }, 00:10:02.536 { 00:10:02.536 "name": null, 00:10:02.536 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:02.536 "is_configured": false, 00:10:02.536 
"data_offset": 2048, 00:10:02.536 "data_size": 63488 00:10:02.536 } 00:10:02.536 ] 00:10:02.536 }' 00:10:02.536 17:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.536 17:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.795 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:02.795 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:02.795 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:02.795 17:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.795 17:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.795 [2024-12-07 17:26:36.047980] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:02.795 [2024-12-07 17:26:36.048160] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:02.795 [2024-12-07 17:26:36.048207] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:02.795 [2024-12-07 17:26:36.048240] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:02.795 [2024-12-07 17:26:36.048829] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:02.795 [2024-12-07 17:26:36.048892] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:02.795 [2024-12-07 17:26:36.049039] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:02.795 [2024-12-07 17:26:36.049113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:02.795 pt2 00:10:02.795 17:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.795 17:26:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:02.795 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:02.795 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:02.795 17:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.796 17:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.796 [2024-12-07 17:26:36.059893] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:02.796 [2024-12-07 17:26:36.059998] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:02.796 [2024-12-07 17:26:36.060015] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:02.796 [2024-12-07 17:26:36.060026] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:02.796 [2024-12-07 17:26:36.060436] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:02.796 [2024-12-07 17:26:36.060459] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:02.796 [2024-12-07 17:26:36.060530] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:02.796 [2024-12-07 17:26:36.060553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:02.796 [2024-12-07 17:26:36.060679] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:02.796 [2024-12-07 17:26:36.060693] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:02.796 [2024-12-07 17:26:36.060961] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:02.796 [2024-12-07 17:26:36.061116] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:10:02.796 [2024-12-07 17:26:36.061131] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:02.796 [2024-12-07 17:26:36.061277] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:02.796 pt3 00:10:02.796 17:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.796 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:02.796 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:02.796 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:02.796 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:02.796 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:02.796 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:02.796 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:02.796 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:02.796 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.796 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.796 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.796 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.796 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:02.796 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.796 17:26:36 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.796 17:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.796 17:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.796 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.796 "name": "raid_bdev1", 00:10:02.796 "uuid": "3908ced4-f088-4f65-85b5-f351102febd6", 00:10:02.796 "strip_size_kb": 0, 00:10:02.796 "state": "online", 00:10:02.796 "raid_level": "raid1", 00:10:02.796 "superblock": true, 00:10:02.796 "num_base_bdevs": 3, 00:10:02.796 "num_base_bdevs_discovered": 3, 00:10:02.796 "num_base_bdevs_operational": 3, 00:10:02.796 "base_bdevs_list": [ 00:10:02.796 { 00:10:02.796 "name": "pt1", 00:10:02.796 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:02.796 "is_configured": true, 00:10:02.796 "data_offset": 2048, 00:10:02.796 "data_size": 63488 00:10:02.796 }, 00:10:02.796 { 00:10:02.796 "name": "pt2", 00:10:02.796 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:02.796 "is_configured": true, 00:10:02.796 "data_offset": 2048, 00:10:02.796 "data_size": 63488 00:10:02.796 }, 00:10:02.796 { 00:10:02.796 "name": "pt3", 00:10:02.796 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:02.796 "is_configured": true, 00:10:02.796 "data_offset": 2048, 00:10:02.796 "data_size": 63488 00:10:02.796 } 00:10:02.796 ] 00:10:02.796 }' 00:10:02.796 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.796 17:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.364 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:03.364 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:03.364 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:10:03.364 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:03.364 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:03.364 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:03.364 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:03.364 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:03.364 17:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.364 17:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.364 [2024-12-07 17:26:36.483510] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:03.364 17:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.364 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:03.364 "name": "raid_bdev1", 00:10:03.364 "aliases": [ 00:10:03.364 "3908ced4-f088-4f65-85b5-f351102febd6" 00:10:03.364 ], 00:10:03.364 "product_name": "Raid Volume", 00:10:03.364 "block_size": 512, 00:10:03.364 "num_blocks": 63488, 00:10:03.364 "uuid": "3908ced4-f088-4f65-85b5-f351102febd6", 00:10:03.364 "assigned_rate_limits": { 00:10:03.364 "rw_ios_per_sec": 0, 00:10:03.364 "rw_mbytes_per_sec": 0, 00:10:03.364 "r_mbytes_per_sec": 0, 00:10:03.364 "w_mbytes_per_sec": 0 00:10:03.364 }, 00:10:03.364 "claimed": false, 00:10:03.364 "zoned": false, 00:10:03.364 "supported_io_types": { 00:10:03.364 "read": true, 00:10:03.364 "write": true, 00:10:03.364 "unmap": false, 00:10:03.364 "flush": false, 00:10:03.364 "reset": true, 00:10:03.364 "nvme_admin": false, 00:10:03.364 "nvme_io": false, 00:10:03.364 "nvme_io_md": false, 00:10:03.364 "write_zeroes": true, 00:10:03.364 "zcopy": false, 00:10:03.364 "get_zone_info": 
false, 00:10:03.364 "zone_management": false, 00:10:03.364 "zone_append": false, 00:10:03.364 "compare": false, 00:10:03.364 "compare_and_write": false, 00:10:03.364 "abort": false, 00:10:03.364 "seek_hole": false, 00:10:03.364 "seek_data": false, 00:10:03.364 "copy": false, 00:10:03.364 "nvme_iov_md": false 00:10:03.364 }, 00:10:03.364 "memory_domains": [ 00:10:03.364 { 00:10:03.364 "dma_device_id": "system", 00:10:03.364 "dma_device_type": 1 00:10:03.364 }, 00:10:03.364 { 00:10:03.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.364 "dma_device_type": 2 00:10:03.364 }, 00:10:03.364 { 00:10:03.364 "dma_device_id": "system", 00:10:03.364 "dma_device_type": 1 00:10:03.364 }, 00:10:03.364 { 00:10:03.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.364 "dma_device_type": 2 00:10:03.364 }, 00:10:03.364 { 00:10:03.364 "dma_device_id": "system", 00:10:03.364 "dma_device_type": 1 00:10:03.364 }, 00:10:03.364 { 00:10:03.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.364 "dma_device_type": 2 00:10:03.364 } 00:10:03.364 ], 00:10:03.364 "driver_specific": { 00:10:03.364 "raid": { 00:10:03.364 "uuid": "3908ced4-f088-4f65-85b5-f351102febd6", 00:10:03.364 "strip_size_kb": 0, 00:10:03.364 "state": "online", 00:10:03.364 "raid_level": "raid1", 00:10:03.364 "superblock": true, 00:10:03.364 "num_base_bdevs": 3, 00:10:03.364 "num_base_bdevs_discovered": 3, 00:10:03.364 "num_base_bdevs_operational": 3, 00:10:03.364 "base_bdevs_list": [ 00:10:03.364 { 00:10:03.364 "name": "pt1", 00:10:03.364 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:03.364 "is_configured": true, 00:10:03.364 "data_offset": 2048, 00:10:03.364 "data_size": 63488 00:10:03.364 }, 00:10:03.364 { 00:10:03.364 "name": "pt2", 00:10:03.364 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:03.364 "is_configured": true, 00:10:03.364 "data_offset": 2048, 00:10:03.364 "data_size": 63488 00:10:03.364 }, 00:10:03.364 { 00:10:03.364 "name": "pt3", 00:10:03.364 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:10:03.364 "is_configured": true, 00:10:03.364 "data_offset": 2048, 00:10:03.364 "data_size": 63488 00:10:03.364 } 00:10:03.364 ] 00:10:03.364 } 00:10:03.364 } 00:10:03.364 }' 00:10:03.364 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:03.364 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:03.364 pt2 00:10:03.364 pt3' 00:10:03.364 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.364 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:03.364 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:03.364 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:03.364 17:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.364 17:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.364 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.364 17:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.364 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:03.364 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:03.364 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:03.364 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:03.364 17:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:10:03.364 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.364 17:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.364 17:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.364 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:03.364 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:03.364 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:03.364 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:03.364 17:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.364 17:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.365 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.365 17:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.630 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:03.630 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:03.630 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:03.630 17:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.630 17:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.630 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:03.630 [2024-12-07 17:26:36.755004] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:03.630 17:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.630 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 3908ced4-f088-4f65-85b5-f351102febd6 '!=' 3908ced4-f088-4f65-85b5-f351102febd6 ']' 00:10:03.630 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:10:03.630 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:03.630 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:03.630 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:10:03.630 17:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.630 17:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.630 [2024-12-07 17:26:36.802744] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:10:03.630 17:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.630 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:03.630 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:03.630 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:03.630 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:03.630 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:03.630 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:03.630 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.630 17:26:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.630 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.630 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.630 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.630 17:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.630 17:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.630 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:03.630 17:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.630 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.630 "name": "raid_bdev1", 00:10:03.630 "uuid": "3908ced4-f088-4f65-85b5-f351102febd6", 00:10:03.630 "strip_size_kb": 0, 00:10:03.630 "state": "online", 00:10:03.630 "raid_level": "raid1", 00:10:03.630 "superblock": true, 00:10:03.630 "num_base_bdevs": 3, 00:10:03.630 "num_base_bdevs_discovered": 2, 00:10:03.630 "num_base_bdevs_operational": 2, 00:10:03.630 "base_bdevs_list": [ 00:10:03.630 { 00:10:03.630 "name": null, 00:10:03.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.630 "is_configured": false, 00:10:03.630 "data_offset": 0, 00:10:03.630 "data_size": 63488 00:10:03.630 }, 00:10:03.630 { 00:10:03.630 "name": "pt2", 00:10:03.630 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:03.630 "is_configured": true, 00:10:03.630 "data_offset": 2048, 00:10:03.630 "data_size": 63488 00:10:03.630 }, 00:10:03.630 { 00:10:03.630 "name": "pt3", 00:10:03.630 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:03.630 "is_configured": true, 00:10:03.630 "data_offset": 2048, 00:10:03.630 "data_size": 63488 00:10:03.630 } 
00:10:03.630 ] 00:10:03.630 }' 00:10:03.630 17:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.630 17:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.890 17:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:03.890 17:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.890 17:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.890 [2024-12-07 17:26:37.198063] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:03.890 [2024-12-07 17:26:37.198203] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:03.890 [2024-12-07 17:26:37.198330] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:03.890 [2024-12-07 17:26:37.198415] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:03.890 [2024-12-07 17:26:37.198467] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:03.890 17:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.890 17:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.890 17:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.890 17:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.890 17:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:10:03.890 17:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.890 17:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:10:03.890 17:26:37 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:10:03.890 17:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:10:03.890 17:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:03.890 17:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:10:03.890 17:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.890 17:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.890 17:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.890 17:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:03.890 17:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:03.890 17:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:10:03.890 17:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.890 17:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.150 17:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.150 17:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:04.150 17:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:04.150 17:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:10:04.150 17:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:04.150 17:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:04.150 17:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.150 17:26:37 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.150 [2024-12-07 17:26:37.281852] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:04.150 [2024-12-07 17:26:37.281944] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:04.150 [2024-12-07 17:26:37.281964] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:10:04.150 [2024-12-07 17:26:37.281975] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:04.150 [2024-12-07 17:26:37.284605] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:04.150 [2024-12-07 17:26:37.284652] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:04.150 [2024-12-07 17:26:37.284743] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:04.150 [2024-12-07 17:26:37.284799] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:04.150 pt2 00:10:04.150 17:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.150 17:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:04.150 17:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:04.150 17:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.150 17:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:04.150 17:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:04.150 17:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:04.151 17:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.151 17:26:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.151 17:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.151 17:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.151 17:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:04.151 17:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.151 17:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.151 17:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.151 17:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.151 17:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.151 "name": "raid_bdev1", 00:10:04.151 "uuid": "3908ced4-f088-4f65-85b5-f351102febd6", 00:10:04.151 "strip_size_kb": 0, 00:10:04.151 "state": "configuring", 00:10:04.151 "raid_level": "raid1", 00:10:04.151 "superblock": true, 00:10:04.151 "num_base_bdevs": 3, 00:10:04.151 "num_base_bdevs_discovered": 1, 00:10:04.151 "num_base_bdevs_operational": 2, 00:10:04.151 "base_bdevs_list": [ 00:10:04.151 { 00:10:04.151 "name": null, 00:10:04.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.151 "is_configured": false, 00:10:04.151 "data_offset": 2048, 00:10:04.151 "data_size": 63488 00:10:04.151 }, 00:10:04.151 { 00:10:04.151 "name": "pt2", 00:10:04.151 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:04.151 "is_configured": true, 00:10:04.151 "data_offset": 2048, 00:10:04.151 "data_size": 63488 00:10:04.151 }, 00:10:04.151 { 00:10:04.151 "name": null, 00:10:04.151 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:04.151 "is_configured": false, 00:10:04.151 "data_offset": 2048, 00:10:04.151 "data_size": 63488 00:10:04.151 } 
00:10:04.151 ] 00:10:04.151 }' 00:10:04.151 17:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.151 17:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.411 17:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:10:04.411 17:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:04.411 17:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:10:04.411 17:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:04.411 17:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.411 17:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.411 [2024-12-07 17:26:37.681199] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:04.411 [2024-12-07 17:26:37.681372] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:04.411 [2024-12-07 17:26:37.681412] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:04.411 [2024-12-07 17:26:37.681443] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:04.411 [2024-12-07 17:26:37.682014] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:04.411 [2024-12-07 17:26:37.682075] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:04.411 [2024-12-07 17:26:37.682214] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:04.411 [2024-12-07 17:26:37.682274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:04.411 [2024-12-07 17:26:37.682434] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:10:04.411 [2024-12-07 17:26:37.682471] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:04.411 [2024-12-07 17:26:37.682756] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:04.411 [2024-12-07 17:26:37.682987] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:04.411 [2024-12-07 17:26:37.683030] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:04.411 [2024-12-07 17:26:37.683228] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:04.411 pt3 00:10:04.411 17:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.411 17:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:04.411 17:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:04.411 17:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:04.411 17:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:04.411 17:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:04.411 17:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:04.411 17:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.411 17:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.411 17:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.411 17:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.411 17:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.411 
17:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:04.411 17:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.411 17:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.411 17:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.411 17:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.411 "name": "raid_bdev1", 00:10:04.411 "uuid": "3908ced4-f088-4f65-85b5-f351102febd6", 00:10:04.411 "strip_size_kb": 0, 00:10:04.411 "state": "online", 00:10:04.411 "raid_level": "raid1", 00:10:04.411 "superblock": true, 00:10:04.411 "num_base_bdevs": 3, 00:10:04.411 "num_base_bdevs_discovered": 2, 00:10:04.411 "num_base_bdevs_operational": 2, 00:10:04.411 "base_bdevs_list": [ 00:10:04.411 { 00:10:04.411 "name": null, 00:10:04.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.411 "is_configured": false, 00:10:04.411 "data_offset": 2048, 00:10:04.411 "data_size": 63488 00:10:04.411 }, 00:10:04.411 { 00:10:04.411 "name": "pt2", 00:10:04.411 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:04.411 "is_configured": true, 00:10:04.411 "data_offset": 2048, 00:10:04.411 "data_size": 63488 00:10:04.411 }, 00:10:04.411 { 00:10:04.411 "name": "pt3", 00:10:04.411 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:04.411 "is_configured": true, 00:10:04.411 "data_offset": 2048, 00:10:04.411 "data_size": 63488 00:10:04.411 } 00:10:04.411 ] 00:10:04.411 }' 00:10:04.411 17:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.411 17:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.981 17:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:04.981 17:26:38 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.981 17:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.981 [2024-12-07 17:26:38.148429] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:04.981 [2024-12-07 17:26:38.148563] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:04.981 [2024-12-07 17:26:38.148675] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:04.981 [2024-12-07 17:26:38.148752] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:04.981 [2024-12-07 17:26:38.148763] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:04.981 17:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.981 17:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:10:04.981 17:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.981 17:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.981 17:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.981 17:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.981 17:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:10:04.981 17:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:10:04.981 17:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:10:04.981 17:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:10:04.981 17:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:10:04.981 17:26:38 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.981 17:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.981 17:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.981 17:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:04.981 17:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.981 17:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.981 [2024-12-07 17:26:38.216327] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:04.981 [2024-12-07 17:26:38.216430] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:04.981 [2024-12-07 17:26:38.216452] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:04.981 [2024-12-07 17:26:38.216463] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:04.981 [2024-12-07 17:26:38.219151] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:04.981 [2024-12-07 17:26:38.219187] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:04.981 [2024-12-07 17:26:38.219290] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:04.981 [2024-12-07 17:26:38.219344] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:04.981 [2024-12-07 17:26:38.219488] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:10:04.981 [2024-12-07 17:26:38.219499] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:04.981 [2024-12-07 17:26:38.219518] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:10:04.981 [2024-12-07 17:26:38.219581] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:04.981 pt1 00:10:04.981 17:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.981 17:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:10:04.981 17:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:04.981 17:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:04.981 17:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.981 17:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:04.981 17:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:04.981 17:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:04.981 17:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.981 17:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.981 17:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.981 17:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.981 17:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.981 17:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.981 17:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.981 17:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:04.981 17:26:38 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.981 17:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.981 "name": "raid_bdev1", 00:10:04.981 "uuid": "3908ced4-f088-4f65-85b5-f351102febd6", 00:10:04.981 "strip_size_kb": 0, 00:10:04.981 "state": "configuring", 00:10:04.981 "raid_level": "raid1", 00:10:04.981 "superblock": true, 00:10:04.981 "num_base_bdevs": 3, 00:10:04.981 "num_base_bdevs_discovered": 1, 00:10:04.981 "num_base_bdevs_operational": 2, 00:10:04.981 "base_bdevs_list": [ 00:10:04.981 { 00:10:04.981 "name": null, 00:10:04.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.981 "is_configured": false, 00:10:04.981 "data_offset": 2048, 00:10:04.981 "data_size": 63488 00:10:04.981 }, 00:10:04.981 { 00:10:04.981 "name": "pt2", 00:10:04.981 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:04.981 "is_configured": true, 00:10:04.981 "data_offset": 2048, 00:10:04.981 "data_size": 63488 00:10:04.981 }, 00:10:04.981 { 00:10:04.981 "name": null, 00:10:04.981 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:04.981 "is_configured": false, 00:10:04.981 "data_offset": 2048, 00:10:04.981 "data_size": 63488 00:10:04.981 } 00:10:04.981 ] 00:10:04.981 }' 00:10:04.981 17:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.981 17:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.241 17:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:10:05.241 17:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:05.241 17:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.241 17:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.500 17:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:05.501 17:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:10:05.501 17:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:05.501 17:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.501 17:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.501 [2024-12-07 17:26:38.655570] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:05.501 [2024-12-07 17:26:38.655758] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:05.501 [2024-12-07 17:26:38.655804] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:10:05.501 [2024-12-07 17:26:38.655833] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:05.501 [2024-12-07 17:26:38.656439] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:05.501 [2024-12-07 17:26:38.656509] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:05.501 [2024-12-07 17:26:38.656637] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:05.501 [2024-12-07 17:26:38.656696] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:05.501 [2024-12-07 17:26:38.656875] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:10:05.501 [2024-12-07 17:26:38.656912] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:05.501 [2024-12-07 17:26:38.657227] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:05.501 [2024-12-07 17:26:38.657432] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:10:05.501 [2024-12-07 17:26:38.657481] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:10:05.501 [2024-12-07 17:26:38.657656] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:05.501 pt3 00:10:05.501 17:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.501 17:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:05.501 17:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:05.501 17:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:05.501 17:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:05.501 17:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:05.501 17:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:05.501 17:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.501 17:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.501 17:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.501 17:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.501 17:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.501 17:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:05.501 17:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.501 17:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.501 17:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:10:05.501 17:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.501 "name": "raid_bdev1", 00:10:05.501 "uuid": "3908ced4-f088-4f65-85b5-f351102febd6", 00:10:05.501 "strip_size_kb": 0, 00:10:05.501 "state": "online", 00:10:05.501 "raid_level": "raid1", 00:10:05.501 "superblock": true, 00:10:05.501 "num_base_bdevs": 3, 00:10:05.501 "num_base_bdevs_discovered": 2, 00:10:05.501 "num_base_bdevs_operational": 2, 00:10:05.501 "base_bdevs_list": [ 00:10:05.501 { 00:10:05.501 "name": null, 00:10:05.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.501 "is_configured": false, 00:10:05.501 "data_offset": 2048, 00:10:05.501 "data_size": 63488 00:10:05.501 }, 00:10:05.501 { 00:10:05.501 "name": "pt2", 00:10:05.501 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:05.501 "is_configured": true, 00:10:05.501 "data_offset": 2048, 00:10:05.501 "data_size": 63488 00:10:05.501 }, 00:10:05.501 { 00:10:05.501 "name": "pt3", 00:10:05.501 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:05.501 "is_configured": true, 00:10:05.501 "data_offset": 2048, 00:10:05.501 "data_size": 63488 00:10:05.501 } 00:10:05.501 ] 00:10:05.501 }' 00:10:05.501 17:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.501 17:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.760 17:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:05.760 17:26:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.760 17:26:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.760 17:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:05.760 17:26:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.019 17:26:39 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:10:06.019 17:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:10:06.019 17:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:06.019 17:26:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.019 17:26:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.019 [2024-12-07 17:26:39.175017] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:06.019 17:26:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.019 17:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 3908ced4-f088-4f65-85b5-f351102febd6 '!=' 3908ced4-f088-4f65-85b5-f351102febd6 ']' 00:10:06.020 17:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68652 00:10:06.020 17:26:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68652 ']' 00:10:06.020 17:26:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68652 00:10:06.020 17:26:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:06.020 17:26:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:06.020 17:26:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68652 00:10:06.020 17:26:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:06.020 17:26:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:06.020 17:26:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68652' 00:10:06.020 killing process with pid 68652 00:10:06.020 17:26:39 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@973 -- # kill 68652 00:10:06.020 [2024-12-07 17:26:39.239389] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:06.020 [2024-12-07 17:26:39.239518] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:06.020 17:26:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68652 00:10:06.020 [2024-12-07 17:26:39.239590] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:06.020 [2024-12-07 17:26:39.239604] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:10:06.278 [2024-12-07 17:26:39.567753] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:07.672 17:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:07.672 00:10:07.672 real 0m7.762s 00:10:07.672 user 0m11.939s 00:10:07.672 sys 0m1.406s 00:10:07.672 ************************************ 00:10:07.672 END TEST raid_superblock_test 00:10:07.672 ************************************ 00:10:07.672 17:26:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:07.672 17:26:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.672 17:26:40 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:10:07.672 17:26:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:07.672 17:26:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:07.673 17:26:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:07.673 ************************************ 00:10:07.673 START TEST raid_read_error_test 00:10:07.673 ************************************ 00:10:07.673 17:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:10:07.673 17:26:40 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:07.673 17:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:07.673 17:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:07.673 17:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:07.673 17:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:07.673 17:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:07.673 17:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:07.673 17:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:07.673 17:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:07.673 17:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:07.673 17:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:07.673 17:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:07.673 17:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:07.673 17:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:07.673 17:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:07.673 17:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:07.673 17:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:07.673 17:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:07.673 17:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:07.673 17:26:40 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:07.673 17:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:07.673 17:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:07.673 17:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:07.673 17:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:07.673 17:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.gX1XgOsaXM 00:10:07.673 17:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69109 00:10:07.673 17:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:07.673 17:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69109 00:10:07.673 17:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 69109 ']' 00:10:07.673 17:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.673 17:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:07.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:07.673 17:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:07.673 17:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:07.673 17:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.673 [2024-12-07 17:26:40.946453] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:10:07.673 [2024-12-07 17:26:40.946585] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69109 ] 00:10:07.933 [2024-12-07 17:26:41.117704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.933 [2024-12-07 17:26:41.262791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.192 [2024-12-07 17:26:41.500047] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:08.192 [2024-12-07 17:26:41.500093] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:08.456 17:26:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:08.456 17:26:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:08.456 17:26:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:08.456 17:26:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:08.456 17:26:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.456 17:26:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.456 BaseBdev1_malloc 00:10:08.457 17:26:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.457 17:26:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:08.457 17:26:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.457 17:26:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.738 true 00:10:08.738 17:26:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:08.738 17:26:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:08.738 17:26:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.738 17:26:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.738 [2024-12-07 17:26:41.841599] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:08.738 [2024-12-07 17:26:41.841667] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:08.738 [2024-12-07 17:26:41.841687] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:08.738 [2024-12-07 17:26:41.841700] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:08.738 [2024-12-07 17:26:41.844023] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:08.738 [2024-12-07 17:26:41.844063] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:08.738 BaseBdev1 00:10:08.738 17:26:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.738 17:26:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:08.738 17:26:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:08.738 17:26:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.738 17:26:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.738 BaseBdev2_malloc 00:10:08.738 17:26:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.738 17:26:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:08.738 17:26:41 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.738 17:26:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.738 true 00:10:08.738 17:26:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.738 17:26:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:08.738 17:26:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.738 17:26:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.738 [2024-12-07 17:26:41.915330] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:08.738 [2024-12-07 17:26:41.915386] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:08.738 [2024-12-07 17:26:41.915402] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:08.738 [2024-12-07 17:26:41.915416] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:08.738 [2024-12-07 17:26:41.917707] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:08.738 [2024-12-07 17:26:41.917745] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:08.738 BaseBdev2 00:10:08.738 17:26:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.738 17:26:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:08.738 17:26:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:08.738 17:26:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.738 17:26:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.738 BaseBdev3_malloc 00:10:08.738 17:26:41 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.738 17:26:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:08.738 17:26:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.738 17:26:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.738 true 00:10:08.738 17:26:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.738 17:26:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:08.738 17:26:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.738 17:26:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.738 [2024-12-07 17:26:41.998017] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:08.738 [2024-12-07 17:26:41.998073] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:08.738 [2024-12-07 17:26:41.998092] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:08.738 [2024-12-07 17:26:41.998104] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:08.738 [2024-12-07 17:26:42.001591] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:08.738 [2024-12-07 17:26:42.001631] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:08.738 BaseBdev3 00:10:08.738 17:26:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.738 17:26:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:08.738 17:26:42 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.738 17:26:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.738 [2024-12-07 17:26:42.010178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:08.738 [2024-12-07 17:26:42.012190] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:08.739 [2024-12-07 17:26:42.012265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:08.739 [2024-12-07 17:26:42.012469] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:08.739 [2024-12-07 17:26:42.012486] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:08.739 [2024-12-07 17:26:42.012730] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:08.739 [2024-12-07 17:26:42.012902] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:08.739 [2024-12-07 17:26:42.012920] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:08.739 [2024-12-07 17:26:42.013073] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:08.739 17:26:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.739 17:26:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:08.739 17:26:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:08.739 17:26:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:08.739 17:26:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:08.739 17:26:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:08.739 17:26:42 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:08.739 17:26:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.739 17:26:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.739 17:26:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.739 17:26:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.739 17:26:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.739 17:26:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:08.739 17:26:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.739 17:26:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.739 17:26:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.739 17:26:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.739 "name": "raid_bdev1", 00:10:08.739 "uuid": "4a0ef36e-ad16-45b6-8c13-7a3ddc5cd131", 00:10:08.739 "strip_size_kb": 0, 00:10:08.739 "state": "online", 00:10:08.739 "raid_level": "raid1", 00:10:08.739 "superblock": true, 00:10:08.739 "num_base_bdevs": 3, 00:10:08.739 "num_base_bdevs_discovered": 3, 00:10:08.739 "num_base_bdevs_operational": 3, 00:10:08.739 "base_bdevs_list": [ 00:10:08.739 { 00:10:08.739 "name": "BaseBdev1", 00:10:08.739 "uuid": "aedb0117-d95d-55fc-bd28-d45a23390d05", 00:10:08.739 "is_configured": true, 00:10:08.739 "data_offset": 2048, 00:10:08.739 "data_size": 63488 00:10:08.739 }, 00:10:08.739 { 00:10:08.739 "name": "BaseBdev2", 00:10:08.739 "uuid": "4e288bb1-c34d-5f51-baf8-b2bc4be7f079", 00:10:08.739 "is_configured": true, 00:10:08.739 "data_offset": 2048, 00:10:08.739 "data_size": 63488 
00:10:08.739 }, 00:10:08.739 { 00:10:08.739 "name": "BaseBdev3", 00:10:08.739 "uuid": "898f2ca8-9a20-550f-b873-d3be0251db42", 00:10:08.739 "is_configured": true, 00:10:08.739 "data_offset": 2048, 00:10:08.739 "data_size": 63488 00:10:08.739 } 00:10:08.739 ] 00:10:08.739 }' 00:10:08.739 17:26:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.739 17:26:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.306 17:26:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:09.306 17:26:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:09.306 [2024-12-07 17:26:42.526869] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:10.242 17:26:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:10.242 17:26:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.242 17:26:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.242 17:26:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.242 17:26:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:10.242 17:26:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:10.242 17:26:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:10:10.242 17:26:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:10.242 17:26:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:10.242 17:26:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:10.242 
17:26:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:10.242 17:26:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:10.242 17:26:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:10.242 17:26:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:10.242 17:26:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.242 17:26:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.242 17:26:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.242 17:26:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.242 17:26:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.242 17:26:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:10.242 17:26:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.242 17:26:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.242 17:26:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.242 17:26:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.242 "name": "raid_bdev1", 00:10:10.242 "uuid": "4a0ef36e-ad16-45b6-8c13-7a3ddc5cd131", 00:10:10.242 "strip_size_kb": 0, 00:10:10.242 "state": "online", 00:10:10.242 "raid_level": "raid1", 00:10:10.242 "superblock": true, 00:10:10.242 "num_base_bdevs": 3, 00:10:10.242 "num_base_bdevs_discovered": 3, 00:10:10.242 "num_base_bdevs_operational": 3, 00:10:10.242 "base_bdevs_list": [ 00:10:10.242 { 00:10:10.242 "name": "BaseBdev1", 00:10:10.242 "uuid": "aedb0117-d95d-55fc-bd28-d45a23390d05", 
00:10:10.242 "is_configured": true, 00:10:10.242 "data_offset": 2048, 00:10:10.242 "data_size": 63488 00:10:10.242 }, 00:10:10.242 { 00:10:10.242 "name": "BaseBdev2", 00:10:10.242 "uuid": "4e288bb1-c34d-5f51-baf8-b2bc4be7f079", 00:10:10.242 "is_configured": true, 00:10:10.242 "data_offset": 2048, 00:10:10.242 "data_size": 63488 00:10:10.242 }, 00:10:10.242 { 00:10:10.242 "name": "BaseBdev3", 00:10:10.242 "uuid": "898f2ca8-9a20-550f-b873-d3be0251db42", 00:10:10.242 "is_configured": true, 00:10:10.242 "data_offset": 2048, 00:10:10.242 "data_size": 63488 00:10:10.242 } 00:10:10.242 ] 00:10:10.242 }' 00:10:10.242 17:26:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.242 17:26:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.810 17:26:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:10.810 17:26:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.810 17:26:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.810 [2024-12-07 17:26:43.931130] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:10.810 [2024-12-07 17:26:43.931178] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:10.810 [2024-12-07 17:26:43.933688] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:10.810 [2024-12-07 17:26:43.933742] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:10.810 [2024-12-07 17:26:43.933847] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:10.810 [2024-12-07 17:26:43.933863] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:10.810 { 00:10:10.810 "results": [ 00:10:10.810 { 00:10:10.810 "job": "raid_bdev1", 
00:10:10.810 "core_mask": "0x1", 00:10:10.810 "workload": "randrw", 00:10:10.810 "percentage": 50, 00:10:10.810 "status": "finished", 00:10:10.810 "queue_depth": 1, 00:10:10.810 "io_size": 131072, 00:10:10.810 "runtime": 1.404924, 00:10:10.810 "iops": 9982.034615395565, 00:10:10.810 "mibps": 1247.7543269244456, 00:10:10.810 "io_failed": 0, 00:10:10.810 "io_timeout": 0, 00:10:10.810 "avg_latency_us": 97.61462745088271, 00:10:10.810 "min_latency_us": 23.252401746724892, 00:10:10.810 "max_latency_us": 1452.380786026201 00:10:10.810 } 00:10:10.810 ], 00:10:10.810 "core_count": 1 00:10:10.810 } 00:10:10.810 17:26:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.810 17:26:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69109 00:10:10.810 17:26:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 69109 ']' 00:10:10.810 17:26:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 69109 00:10:10.810 17:26:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:10.810 17:26:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:10.810 17:26:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69109 00:10:10.810 17:26:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:10.810 17:26:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:10.810 killing process with pid 69109 00:10:10.810 17:26:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69109' 00:10:10.810 17:26:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 69109 00:10:10.810 [2024-12-07 17:26:43.966412] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:10.810 17:26:43 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 69109 00:10:11.069 [2024-12-07 17:26:44.210895] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:12.443 17:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.gX1XgOsaXM 00:10:12.443 17:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:12.443 17:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:12.443 17:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:12.443 17:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:12.443 17:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:12.443 17:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:12.443 17:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:12.443 00:10:12.443 real 0m4.640s 00:10:12.443 user 0m5.383s 00:10:12.443 sys 0m0.663s 00:10:12.443 17:26:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:12.443 17:26:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.443 ************************************ 00:10:12.443 END TEST raid_read_error_test 00:10:12.443 ************************************ 00:10:12.443 17:26:45 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:10:12.443 17:26:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:12.443 17:26:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:12.443 17:26:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:12.443 ************************************ 00:10:12.443 START TEST raid_write_error_test 00:10:12.443 ************************************ 00:10:12.443 17:26:45 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:10:12.443 17:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:12.443 17:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:12.443 17:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:12.443 17:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:12.443 17:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:12.443 17:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:12.443 17:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:12.443 17:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:12.443 17:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:12.443 17:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:12.443 17:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:12.443 17:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:12.443 17:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:12.443 17:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:12.443 17:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:12.443 17:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:12.443 17:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:12.443 17:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:10:12.443 17:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:12.443 17:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:12.443 17:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:12.443 17:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:12.443 17:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:12.443 17:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:12.443 17:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.TNn6PUO3wX 00:10:12.443 17:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69250 00:10:12.443 17:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69250 00:10:12.443 17:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:12.443 17:26:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69250 ']' 00:10:12.443 17:26:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.443 17:26:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:12.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.443 17:26:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:12.443 17:26:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:12.443 17:26:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.443 [2024-12-07 17:26:45.651649] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:10:12.443 [2024-12-07 17:26:45.651840] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69250 ] 00:10:12.443 [2024-12-07 17:26:45.804967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.700 [2024-12-07 17:26:45.941256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.958 [2024-12-07 17:26:46.180901] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:12.958 [2024-12-07 17:26:46.181075] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:13.216 17:26:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:13.216 17:26:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:13.216 17:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:13.216 17:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:13.216 17:26:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.216 17:26:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.216 BaseBdev1_malloc 00:10:13.216 17:26:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.216 17:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:10:13.216 17:26:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.216 17:26:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.216 true 00:10:13.216 17:26:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.216 17:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:13.216 17:26:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.216 17:26:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.216 [2024-12-07 17:26:46.532858] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:13.216 [2024-12-07 17:26:46.532944] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:13.216 [2024-12-07 17:26:46.532968] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:13.216 [2024-12-07 17:26:46.532980] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:13.216 [2024-12-07 17:26:46.535294] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:13.216 [2024-12-07 17:26:46.535332] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:13.216 BaseBdev1 00:10:13.216 17:26:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.216 17:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:13.216 17:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:13.216 17:26:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.216 17:26:46 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:13.216 BaseBdev2_malloc 00:10:13.216 17:26:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.216 17:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:13.216 17:26:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.216 17:26:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.475 true 00:10:13.475 17:26:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.475 17:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:13.475 17:26:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.475 17:26:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.475 [2024-12-07 17:26:46.605320] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:13.475 [2024-12-07 17:26:46.605473] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:13.475 [2024-12-07 17:26:46.605496] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:13.475 [2024-12-07 17:26:46.605509] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:13.475 [2024-12-07 17:26:46.607863] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:13.475 [2024-12-07 17:26:46.607904] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:13.475 BaseBdev2 00:10:13.475 17:26:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.475 17:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:13.475 17:26:46 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:13.475 17:26:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.475 17:26:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.475 BaseBdev3_malloc 00:10:13.475 17:26:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.475 17:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:13.475 17:26:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.475 17:26:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.475 true 00:10:13.475 17:26:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.475 17:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:13.475 17:26:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.475 17:26:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.475 [2024-12-07 17:26:46.697503] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:13.475 [2024-12-07 17:26:46.697571] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:13.475 [2024-12-07 17:26:46.697592] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:13.475 [2024-12-07 17:26:46.697604] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:13.475 [2024-12-07 17:26:46.700034] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:13.476 [2024-12-07 17:26:46.700073] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:10:13.476 BaseBdev3 00:10:13.476 17:26:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.476 17:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:13.476 17:26:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.476 17:26:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.476 [2024-12-07 17:26:46.709569] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:13.476 [2024-12-07 17:26:46.711648] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:13.476 [2024-12-07 17:26:46.711828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:13.476 [2024-12-07 17:26:46.712070] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:13.476 [2024-12-07 17:26:46.712085] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:13.476 [2024-12-07 17:26:46.712342] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:13.476 [2024-12-07 17:26:46.712528] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:13.476 [2024-12-07 17:26:46.712540] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:13.476 [2024-12-07 17:26:46.712698] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:13.476 17:26:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.476 17:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:13.476 17:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:10:13.476 17:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:13.476 17:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:13.476 17:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:13.476 17:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:13.476 17:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.476 17:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.476 17:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.476 17:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.476 17:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.476 17:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:13.476 17:26:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.476 17:26:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.476 17:26:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.476 17:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.476 "name": "raid_bdev1", 00:10:13.476 "uuid": "56697693-06f3-4b53-91a9-22111adab038", 00:10:13.476 "strip_size_kb": 0, 00:10:13.476 "state": "online", 00:10:13.476 "raid_level": "raid1", 00:10:13.476 "superblock": true, 00:10:13.476 "num_base_bdevs": 3, 00:10:13.476 "num_base_bdevs_discovered": 3, 00:10:13.476 "num_base_bdevs_operational": 3, 00:10:13.476 "base_bdevs_list": [ 00:10:13.476 { 00:10:13.476 "name": "BaseBdev1", 00:10:13.476 
"uuid": "3e567c79-14f7-5afc-bbaa-e90bdafd8bb9", 00:10:13.476 "is_configured": true, 00:10:13.476 "data_offset": 2048, 00:10:13.476 "data_size": 63488 00:10:13.476 }, 00:10:13.476 { 00:10:13.476 "name": "BaseBdev2", 00:10:13.476 "uuid": "0161ecd9-71a1-5fd6-9c62-b466dc341320", 00:10:13.476 "is_configured": true, 00:10:13.476 "data_offset": 2048, 00:10:13.476 "data_size": 63488 00:10:13.476 }, 00:10:13.476 { 00:10:13.476 "name": "BaseBdev3", 00:10:13.476 "uuid": "e83177df-a75f-5ca7-8cdd-41c23f4fda7f", 00:10:13.476 "is_configured": true, 00:10:13.476 "data_offset": 2048, 00:10:13.476 "data_size": 63488 00:10:13.476 } 00:10:13.476 ] 00:10:13.476 }' 00:10:13.476 17:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.476 17:26:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.043 17:26:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:14.043 17:26:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:14.043 [2024-12-07 17:26:47.266195] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:14.979 17:26:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:14.979 17:26:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.979 17:26:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.979 [2024-12-07 17:26:48.181474] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:10:14.979 [2024-12-07 17:26:48.181673] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:14.979 [2024-12-07 17:26:48.181957] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:10:14.979 17:26:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.979 17:26:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:14.979 17:26:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:14.979 17:26:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:10:14.979 17:26:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:10:14.979 17:26:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:14.979 17:26:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:14.979 17:26:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:14.979 17:26:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:14.979 17:26:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:14.979 17:26:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:14.979 17:26:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.979 17:26:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.979 17:26:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.979 17:26:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.979 17:26:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.979 17:26:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:14.979 17:26:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:14.979 17:26:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.979 17:26:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.979 17:26:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.979 "name": "raid_bdev1", 00:10:14.979 "uuid": "56697693-06f3-4b53-91a9-22111adab038", 00:10:14.979 "strip_size_kb": 0, 00:10:14.979 "state": "online", 00:10:14.979 "raid_level": "raid1", 00:10:14.979 "superblock": true, 00:10:14.979 "num_base_bdevs": 3, 00:10:14.979 "num_base_bdevs_discovered": 2, 00:10:14.979 "num_base_bdevs_operational": 2, 00:10:14.979 "base_bdevs_list": [ 00:10:14.979 { 00:10:14.979 "name": null, 00:10:14.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.979 "is_configured": false, 00:10:14.979 "data_offset": 0, 00:10:14.979 "data_size": 63488 00:10:14.979 }, 00:10:14.979 { 00:10:14.979 "name": "BaseBdev2", 00:10:14.979 "uuid": "0161ecd9-71a1-5fd6-9c62-b466dc341320", 00:10:14.979 "is_configured": true, 00:10:14.979 "data_offset": 2048, 00:10:14.979 "data_size": 63488 00:10:14.979 }, 00:10:14.979 { 00:10:14.979 "name": "BaseBdev3", 00:10:14.979 "uuid": "e83177df-a75f-5ca7-8cdd-41c23f4fda7f", 00:10:14.979 "is_configured": true, 00:10:14.979 "data_offset": 2048, 00:10:14.979 "data_size": 63488 00:10:14.979 } 00:10:14.979 ] 00:10:14.979 }' 00:10:14.979 17:26:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.979 17:26:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.547 17:26:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:15.547 17:26:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.547 17:26:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.547 [2024-12-07 17:26:48.661055] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:15.547 [2024-12-07 17:26:48.661207] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:15.547 [2024-12-07 17:26:48.663815] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:15.547 [2024-12-07 17:26:48.663939] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:15.547 [2024-12-07 17:26:48.664045] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:15.547 [2024-12-07 17:26:48.664103] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:15.547 { 00:10:15.547 "results": [ 00:10:15.547 { 00:10:15.547 "job": "raid_bdev1", 00:10:15.547 "core_mask": "0x1", 00:10:15.547 "workload": "randrw", 00:10:15.547 "percentage": 50, 00:10:15.547 "status": "finished", 00:10:15.547 "queue_depth": 1, 00:10:15.547 "io_size": 131072, 00:10:15.547 "runtime": 1.395509, 00:10:15.547 "iops": 11521.960804265684, 00:10:15.547 "mibps": 1440.2451005332105, 00:10:15.547 "io_failed": 0, 00:10:15.547 "io_timeout": 0, 00:10:15.547 "avg_latency_us": 84.18043942966102, 00:10:15.547 "min_latency_us": 23.58777292576419, 00:10:15.547 "max_latency_us": 1452.380786026201 00:10:15.547 } 00:10:15.547 ], 00:10:15.547 "core_count": 1 00:10:15.547 } 00:10:15.547 17:26:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.547 17:26:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69250 00:10:15.547 17:26:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69250 ']' 00:10:15.547 17:26:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 69250 00:10:15.547 17:26:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:15.547 17:26:48 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:15.547 17:26:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69250 00:10:15.547 killing process with pid 69250 00:10:15.548 17:26:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:15.548 17:26:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:15.548 17:26:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69250' 00:10:15.548 17:26:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69250 00:10:15.548 [2024-12-07 17:26:48.700800] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:15.548 17:26:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69250 00:10:15.805 [2024-12-07 17:26:48.950234] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:17.205 17:26:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.TNn6PUO3wX 00:10:17.205 17:26:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:17.205 17:26:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:17.205 17:26:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:17.205 17:26:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:17.205 17:26:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:17.205 17:26:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:17.205 17:26:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:17.205 00:10:17.205 real 0m4.685s 00:10:17.205 user 0m5.411s 00:10:17.205 sys 0m0.686s 00:10:17.205 
************************************ 00:10:17.205 END TEST raid_write_error_test 00:10:17.205 ************************************ 00:10:17.205 17:26:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:17.205 17:26:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.205 17:26:50 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:10:17.205 17:26:50 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:17.205 17:26:50 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:10:17.205 17:26:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:17.205 17:26:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:17.205 17:26:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:17.205 ************************************ 00:10:17.205 START TEST raid_state_function_test 00:10:17.205 ************************************ 00:10:17.205 17:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:10:17.205 17:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:17.205 17:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:17.205 17:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:17.205 17:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:17.205 17:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:17.205 17:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:17.205 17:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:17.205 17:26:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:17.205 17:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:17.205 17:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:17.205 17:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:17.205 17:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:17.205 17:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:17.205 17:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:17.205 17:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:17.205 17:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:17.205 17:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:17.205 17:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:17.205 17:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:17.205 17:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:17.205 17:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:17.205 17:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:17.205 17:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:17.206 17:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:17.206 17:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:17.206 17:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:10:17.206 17:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:17.206 17:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:17.206 17:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:17.206 17:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69394 00:10:17.206 17:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:17.206 17:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69394' 00:10:17.206 Process raid pid: 69394 00:10:17.206 17:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69394 00:10:17.206 17:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69394 ']' 00:10:17.206 17:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:17.206 17:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:17.206 17:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:17.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:17.206 17:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:17.206 17:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.206 [2024-12-07 17:26:50.406343] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:10:17.206 [2024-12-07 17:26:50.406474] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:17.206 [2024-12-07 17:26:50.581368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.464 [2024-12-07 17:26:50.712636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.722 [2024-12-07 17:26:50.951588] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:17.722 [2024-12-07 17:26:50.951630] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:17.980 17:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:17.980 17:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:17.980 17:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:17.981 17:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.981 17:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.981 [2024-12-07 17:26:51.245631] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:17.981 [2024-12-07 17:26:51.245704] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:17.981 [2024-12-07 17:26:51.245715] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:17.981 [2024-12-07 17:26:51.245725] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:17.981 [2024-12-07 17:26:51.245731] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:17.981 [2024-12-07 17:26:51.245741] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:17.981 [2024-12-07 17:26:51.245747] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:17.981 [2024-12-07 17:26:51.245756] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:17.981 17:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.981 17:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:17.981 17:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.981 17:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.981 17:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:17.981 17:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.981 17:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:17.981 17:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.981 17:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.981 17:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.981 17:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.981 17:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.981 17:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.981 17:26:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.981 17:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.981 17:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.981 17:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.981 "name": "Existed_Raid", 00:10:17.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.981 "strip_size_kb": 64, 00:10:17.981 "state": "configuring", 00:10:17.981 "raid_level": "raid0", 00:10:17.981 "superblock": false, 00:10:17.981 "num_base_bdevs": 4, 00:10:17.981 "num_base_bdevs_discovered": 0, 00:10:17.981 "num_base_bdevs_operational": 4, 00:10:17.981 "base_bdevs_list": [ 00:10:17.981 { 00:10:17.981 "name": "BaseBdev1", 00:10:17.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.981 "is_configured": false, 00:10:17.981 "data_offset": 0, 00:10:17.981 "data_size": 0 00:10:17.981 }, 00:10:17.981 { 00:10:17.981 "name": "BaseBdev2", 00:10:17.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.981 "is_configured": false, 00:10:17.981 "data_offset": 0, 00:10:17.981 "data_size": 0 00:10:17.981 }, 00:10:17.981 { 00:10:17.981 "name": "BaseBdev3", 00:10:17.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.981 "is_configured": false, 00:10:17.981 "data_offset": 0, 00:10:17.981 "data_size": 0 00:10:17.981 }, 00:10:17.981 { 00:10:17.981 "name": "BaseBdev4", 00:10:17.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.981 "is_configured": false, 00:10:17.981 "data_offset": 0, 00:10:17.981 "data_size": 0 00:10:17.981 } 00:10:17.981 ] 00:10:17.981 }' 00:10:17.981 17:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.981 17:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.547 17:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:10:18.547 17:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.547 17:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.547 [2024-12-07 17:26:51.640948] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:18.547 [2024-12-07 17:26:51.641095] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:18.547 17:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.547 17:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:18.547 17:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.547 17:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.547 [2024-12-07 17:26:51.652881] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:18.547 [2024-12-07 17:26:51.652976] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:18.547 [2024-12-07 17:26:51.653009] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:18.547 [2024-12-07 17:26:51.653032] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:18.547 [2024-12-07 17:26:51.653049] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:18.547 [2024-12-07 17:26:51.653069] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:18.547 [2024-12-07 17:26:51.653086] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:18.548 [2024-12-07 17:26:51.653134] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:18.548 17:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.548 17:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:18.548 17:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.548 17:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.548 [2024-12-07 17:26:51.704288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:18.548 BaseBdev1 00:10:18.548 17:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.548 17:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:18.548 17:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:18.548 17:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:18.548 17:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:18.548 17:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:18.548 17:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:18.548 17:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:18.548 17:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.548 17:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.548 17:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.548 17:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:18.548 17:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.548 17:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.548 [ 00:10:18.548 { 00:10:18.548 "name": "BaseBdev1", 00:10:18.548 "aliases": [ 00:10:18.548 "aa5698f1-70de-4d56-afde-d47b652c6985" 00:10:18.548 ], 00:10:18.548 "product_name": "Malloc disk", 00:10:18.548 "block_size": 512, 00:10:18.548 "num_blocks": 65536, 00:10:18.548 "uuid": "aa5698f1-70de-4d56-afde-d47b652c6985", 00:10:18.548 "assigned_rate_limits": { 00:10:18.548 "rw_ios_per_sec": 0, 00:10:18.548 "rw_mbytes_per_sec": 0, 00:10:18.548 "r_mbytes_per_sec": 0, 00:10:18.548 "w_mbytes_per_sec": 0 00:10:18.548 }, 00:10:18.548 "claimed": true, 00:10:18.548 "claim_type": "exclusive_write", 00:10:18.548 "zoned": false, 00:10:18.548 "supported_io_types": { 00:10:18.548 "read": true, 00:10:18.548 "write": true, 00:10:18.548 "unmap": true, 00:10:18.548 "flush": true, 00:10:18.548 "reset": true, 00:10:18.548 "nvme_admin": false, 00:10:18.548 "nvme_io": false, 00:10:18.548 "nvme_io_md": false, 00:10:18.548 "write_zeroes": true, 00:10:18.548 "zcopy": true, 00:10:18.548 "get_zone_info": false, 00:10:18.548 "zone_management": false, 00:10:18.548 "zone_append": false, 00:10:18.548 "compare": false, 00:10:18.548 "compare_and_write": false, 00:10:18.548 "abort": true, 00:10:18.548 "seek_hole": false, 00:10:18.548 "seek_data": false, 00:10:18.548 "copy": true, 00:10:18.548 "nvme_iov_md": false 00:10:18.548 }, 00:10:18.548 "memory_domains": [ 00:10:18.548 { 00:10:18.548 "dma_device_id": "system", 00:10:18.548 "dma_device_type": 1 00:10:18.548 }, 00:10:18.548 { 00:10:18.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.548 "dma_device_type": 2 00:10:18.548 } 00:10:18.548 ], 00:10:18.548 "driver_specific": {} 00:10:18.548 } 00:10:18.548 ] 00:10:18.548 17:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:18.548 17:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:18.548 17:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:18.548 17:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.548 17:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.548 17:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:18.548 17:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:18.548 17:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:18.548 17:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.548 17:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.548 17:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.548 17:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.548 17:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.548 17:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.548 17:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.548 17:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.548 17:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.548 17:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.548 "name": "Existed_Raid", 
00:10:18.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.548 "strip_size_kb": 64, 00:10:18.548 "state": "configuring", 00:10:18.548 "raid_level": "raid0", 00:10:18.548 "superblock": false, 00:10:18.548 "num_base_bdevs": 4, 00:10:18.548 "num_base_bdevs_discovered": 1, 00:10:18.548 "num_base_bdevs_operational": 4, 00:10:18.548 "base_bdevs_list": [ 00:10:18.548 { 00:10:18.548 "name": "BaseBdev1", 00:10:18.548 "uuid": "aa5698f1-70de-4d56-afde-d47b652c6985", 00:10:18.548 "is_configured": true, 00:10:18.548 "data_offset": 0, 00:10:18.548 "data_size": 65536 00:10:18.548 }, 00:10:18.548 { 00:10:18.548 "name": "BaseBdev2", 00:10:18.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.548 "is_configured": false, 00:10:18.548 "data_offset": 0, 00:10:18.548 "data_size": 0 00:10:18.548 }, 00:10:18.548 { 00:10:18.548 "name": "BaseBdev3", 00:10:18.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.548 "is_configured": false, 00:10:18.548 "data_offset": 0, 00:10:18.548 "data_size": 0 00:10:18.548 }, 00:10:18.548 { 00:10:18.548 "name": "BaseBdev4", 00:10:18.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.548 "is_configured": false, 00:10:18.548 "data_offset": 0, 00:10:18.548 "data_size": 0 00:10:18.548 } 00:10:18.548 ] 00:10:18.548 }' 00:10:18.548 17:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.548 17:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.814 17:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:18.814 17:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.814 17:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.814 [2024-12-07 17:26:52.167602] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:18.814 [2024-12-07 17:26:52.167772] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:18.814 17:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.814 17:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:18.814 17:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.814 17:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.814 [2024-12-07 17:26:52.179612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:18.814 [2024-12-07 17:26:52.181741] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:18.814 [2024-12-07 17:26:52.181819] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:18.814 [2024-12-07 17:26:52.181849] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:18.814 [2024-12-07 17:26:52.181873] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:18.814 [2024-12-07 17:26:52.181891] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:18.814 [2024-12-07 17:26:52.181911] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:18.814 17:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.814 17:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:18.814 17:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:18.814 17:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:10:18.814 17:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.814 17:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.814 17:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:18.814 17:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:18.814 17:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:18.814 17:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.814 17:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.814 17:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.814 17:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.814 17:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.814 17:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.814 17:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.814 17:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.074 17:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.074 17:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.074 "name": "Existed_Raid", 00:10:19.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.074 "strip_size_kb": 64, 00:10:19.074 "state": "configuring", 00:10:19.074 "raid_level": "raid0", 00:10:19.074 "superblock": false, 00:10:19.074 "num_base_bdevs": 4, 00:10:19.074 
"num_base_bdevs_discovered": 1, 00:10:19.074 "num_base_bdevs_operational": 4, 00:10:19.074 "base_bdevs_list": [ 00:10:19.074 { 00:10:19.074 "name": "BaseBdev1", 00:10:19.074 "uuid": "aa5698f1-70de-4d56-afde-d47b652c6985", 00:10:19.074 "is_configured": true, 00:10:19.074 "data_offset": 0, 00:10:19.074 "data_size": 65536 00:10:19.074 }, 00:10:19.074 { 00:10:19.074 "name": "BaseBdev2", 00:10:19.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.074 "is_configured": false, 00:10:19.074 "data_offset": 0, 00:10:19.074 "data_size": 0 00:10:19.074 }, 00:10:19.074 { 00:10:19.074 "name": "BaseBdev3", 00:10:19.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.074 "is_configured": false, 00:10:19.074 "data_offset": 0, 00:10:19.074 "data_size": 0 00:10:19.074 }, 00:10:19.074 { 00:10:19.074 "name": "BaseBdev4", 00:10:19.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.074 "is_configured": false, 00:10:19.074 "data_offset": 0, 00:10:19.074 "data_size": 0 00:10:19.074 } 00:10:19.074 ] 00:10:19.074 }' 00:10:19.074 17:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.074 17:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.332 17:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:19.332 17:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.332 17:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.332 [2024-12-07 17:26:52.654842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:19.332 BaseBdev2 00:10:19.332 17:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.332 17:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:19.332 17:26:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:19.332 17:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:19.332 17:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:19.332 17:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:19.332 17:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:19.332 17:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:19.332 17:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.332 17:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.332 17:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.332 17:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:19.332 17:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.332 17:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.332 [ 00:10:19.332 { 00:10:19.332 "name": "BaseBdev2", 00:10:19.332 "aliases": [ 00:10:19.332 "cb6e5b1a-5df5-48db-a790-588f6ee45831" 00:10:19.332 ], 00:10:19.332 "product_name": "Malloc disk", 00:10:19.332 "block_size": 512, 00:10:19.332 "num_blocks": 65536, 00:10:19.332 "uuid": "cb6e5b1a-5df5-48db-a790-588f6ee45831", 00:10:19.332 "assigned_rate_limits": { 00:10:19.332 "rw_ios_per_sec": 0, 00:10:19.332 "rw_mbytes_per_sec": 0, 00:10:19.332 "r_mbytes_per_sec": 0, 00:10:19.332 "w_mbytes_per_sec": 0 00:10:19.332 }, 00:10:19.332 "claimed": true, 00:10:19.332 "claim_type": "exclusive_write", 00:10:19.332 "zoned": false, 00:10:19.332 "supported_io_types": { 
00:10:19.332 "read": true, 00:10:19.332 "write": true, 00:10:19.332 "unmap": true, 00:10:19.332 "flush": true, 00:10:19.332 "reset": true, 00:10:19.332 "nvme_admin": false, 00:10:19.332 "nvme_io": false, 00:10:19.332 "nvme_io_md": false, 00:10:19.332 "write_zeroes": true, 00:10:19.332 "zcopy": true, 00:10:19.332 "get_zone_info": false, 00:10:19.332 "zone_management": false, 00:10:19.332 "zone_append": false, 00:10:19.332 "compare": false, 00:10:19.332 "compare_and_write": false, 00:10:19.332 "abort": true, 00:10:19.332 "seek_hole": false, 00:10:19.332 "seek_data": false, 00:10:19.332 "copy": true, 00:10:19.332 "nvme_iov_md": false 00:10:19.332 }, 00:10:19.332 "memory_domains": [ 00:10:19.332 { 00:10:19.332 "dma_device_id": "system", 00:10:19.332 "dma_device_type": 1 00:10:19.332 }, 00:10:19.332 { 00:10:19.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.332 "dma_device_type": 2 00:10:19.332 } 00:10:19.332 ], 00:10:19.332 "driver_specific": {} 00:10:19.332 } 00:10:19.332 ] 00:10:19.332 17:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.332 17:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:19.332 17:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:19.332 17:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:19.332 17:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:19.332 17:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:19.332 17:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.332 17:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:19.332 17:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:19.332 17:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:19.332 17:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.332 17:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.332 17:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.332 17:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.332 17:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.332 17:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.332 17:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.332 17:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.606 17:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.606 17:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.606 "name": "Existed_Raid", 00:10:19.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.606 "strip_size_kb": 64, 00:10:19.606 "state": "configuring", 00:10:19.606 "raid_level": "raid0", 00:10:19.606 "superblock": false, 00:10:19.606 "num_base_bdevs": 4, 00:10:19.606 "num_base_bdevs_discovered": 2, 00:10:19.606 "num_base_bdevs_operational": 4, 00:10:19.606 "base_bdevs_list": [ 00:10:19.606 { 00:10:19.606 "name": "BaseBdev1", 00:10:19.606 "uuid": "aa5698f1-70de-4d56-afde-d47b652c6985", 00:10:19.606 "is_configured": true, 00:10:19.606 "data_offset": 0, 00:10:19.606 "data_size": 65536 00:10:19.606 }, 00:10:19.606 { 00:10:19.606 "name": "BaseBdev2", 00:10:19.606 "uuid": "cb6e5b1a-5df5-48db-a790-588f6ee45831", 00:10:19.606 
"is_configured": true, 00:10:19.606 "data_offset": 0, 00:10:19.606 "data_size": 65536 00:10:19.606 }, 00:10:19.606 { 00:10:19.606 "name": "BaseBdev3", 00:10:19.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.606 "is_configured": false, 00:10:19.606 "data_offset": 0, 00:10:19.606 "data_size": 0 00:10:19.606 }, 00:10:19.606 { 00:10:19.606 "name": "BaseBdev4", 00:10:19.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.606 "is_configured": false, 00:10:19.606 "data_offset": 0, 00:10:19.606 "data_size": 0 00:10:19.606 } 00:10:19.606 ] 00:10:19.606 }' 00:10:19.606 17:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.606 17:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.863 17:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:19.863 17:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.863 17:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.863 [2024-12-07 17:26:53.161592] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:19.863 BaseBdev3 00:10:19.863 17:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.863 17:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:19.863 17:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:19.863 17:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:19.863 17:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:19.863 17:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:19.863 17:26:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:19.863 17:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:19.863 17:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.863 17:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.863 17:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.863 17:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:19.863 17:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.863 17:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.863 [ 00:10:19.863 { 00:10:19.863 "name": "BaseBdev3", 00:10:19.863 "aliases": [ 00:10:19.863 "287d21c7-ea90-454c-a2b6-e141b769b61f" 00:10:19.863 ], 00:10:19.863 "product_name": "Malloc disk", 00:10:19.863 "block_size": 512, 00:10:19.863 "num_blocks": 65536, 00:10:19.863 "uuid": "287d21c7-ea90-454c-a2b6-e141b769b61f", 00:10:19.863 "assigned_rate_limits": { 00:10:19.863 "rw_ios_per_sec": 0, 00:10:19.863 "rw_mbytes_per_sec": 0, 00:10:19.863 "r_mbytes_per_sec": 0, 00:10:19.863 "w_mbytes_per_sec": 0 00:10:19.863 }, 00:10:19.863 "claimed": true, 00:10:19.863 "claim_type": "exclusive_write", 00:10:19.863 "zoned": false, 00:10:19.863 "supported_io_types": { 00:10:19.863 "read": true, 00:10:19.863 "write": true, 00:10:19.863 "unmap": true, 00:10:19.863 "flush": true, 00:10:19.863 "reset": true, 00:10:19.863 "nvme_admin": false, 00:10:19.863 "nvme_io": false, 00:10:19.863 "nvme_io_md": false, 00:10:19.863 "write_zeroes": true, 00:10:19.863 "zcopy": true, 00:10:19.863 "get_zone_info": false, 00:10:19.863 "zone_management": false, 00:10:19.863 "zone_append": false, 00:10:19.863 "compare": false, 00:10:19.863 "compare_and_write": false, 
00:10:19.863 "abort": true, 00:10:19.863 "seek_hole": false, 00:10:19.863 "seek_data": false, 00:10:19.863 "copy": true, 00:10:19.863 "nvme_iov_md": false 00:10:19.863 }, 00:10:19.863 "memory_domains": [ 00:10:19.863 { 00:10:19.863 "dma_device_id": "system", 00:10:19.863 "dma_device_type": 1 00:10:19.863 }, 00:10:19.863 { 00:10:19.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.863 "dma_device_type": 2 00:10:19.863 } 00:10:19.863 ], 00:10:19.863 "driver_specific": {} 00:10:19.863 } 00:10:19.863 ] 00:10:19.863 17:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.863 17:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:19.863 17:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:19.863 17:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:19.863 17:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:19.863 17:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:19.863 17:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.863 17:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:19.863 17:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:19.863 17:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:19.863 17:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.863 17:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.863 17:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:19.863 17:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.863 17:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.863 17:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.863 17:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.863 17:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.863 17:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.121 17:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.121 "name": "Existed_Raid", 00:10:20.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.121 "strip_size_kb": 64, 00:10:20.121 "state": "configuring", 00:10:20.121 "raid_level": "raid0", 00:10:20.121 "superblock": false, 00:10:20.121 "num_base_bdevs": 4, 00:10:20.121 "num_base_bdevs_discovered": 3, 00:10:20.121 "num_base_bdevs_operational": 4, 00:10:20.121 "base_bdevs_list": [ 00:10:20.121 { 00:10:20.121 "name": "BaseBdev1", 00:10:20.121 "uuid": "aa5698f1-70de-4d56-afde-d47b652c6985", 00:10:20.121 "is_configured": true, 00:10:20.121 "data_offset": 0, 00:10:20.121 "data_size": 65536 00:10:20.121 }, 00:10:20.121 { 00:10:20.121 "name": "BaseBdev2", 00:10:20.121 "uuid": "cb6e5b1a-5df5-48db-a790-588f6ee45831", 00:10:20.121 "is_configured": true, 00:10:20.121 "data_offset": 0, 00:10:20.121 "data_size": 65536 00:10:20.121 }, 00:10:20.121 { 00:10:20.121 "name": "BaseBdev3", 00:10:20.121 "uuid": "287d21c7-ea90-454c-a2b6-e141b769b61f", 00:10:20.121 "is_configured": true, 00:10:20.121 "data_offset": 0, 00:10:20.121 "data_size": 65536 00:10:20.121 }, 00:10:20.121 { 00:10:20.121 "name": "BaseBdev4", 00:10:20.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.121 "is_configured": false, 
00:10:20.121 "data_offset": 0, 00:10:20.121 "data_size": 0 00:10:20.121 } 00:10:20.121 ] 00:10:20.121 }' 00:10:20.121 17:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.121 17:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.378 17:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:20.379 17:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.379 17:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.379 [2024-12-07 17:26:53.677345] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:20.379 [2024-12-07 17:26:53.677478] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:20.379 [2024-12-07 17:26:53.677506] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:20.379 [2024-12-07 17:26:53.677863] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:20.379 [2024-12-07 17:26:53.678101] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:20.379 [2024-12-07 17:26:53.678145] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:20.379 [2024-12-07 17:26:53.678480] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:20.379 BaseBdev4 00:10:20.379 17:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.379 17:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:20.379 17:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:20.379 17:26:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:20.379 17:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:20.379 17:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:20.379 17:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:20.379 17:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:20.379 17:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.379 17:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.379 17:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.379 17:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:20.379 17:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.379 17:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.379 [ 00:10:20.379 { 00:10:20.379 "name": "BaseBdev4", 00:10:20.379 "aliases": [ 00:10:20.379 "c1f4f454-de79-4dcd-8606-fbe8b95ff132" 00:10:20.379 ], 00:10:20.379 "product_name": "Malloc disk", 00:10:20.379 "block_size": 512, 00:10:20.379 "num_blocks": 65536, 00:10:20.379 "uuid": "c1f4f454-de79-4dcd-8606-fbe8b95ff132", 00:10:20.379 "assigned_rate_limits": { 00:10:20.379 "rw_ios_per_sec": 0, 00:10:20.379 "rw_mbytes_per_sec": 0, 00:10:20.379 "r_mbytes_per_sec": 0, 00:10:20.379 "w_mbytes_per_sec": 0 00:10:20.379 }, 00:10:20.379 "claimed": true, 00:10:20.379 "claim_type": "exclusive_write", 00:10:20.379 "zoned": false, 00:10:20.379 "supported_io_types": { 00:10:20.379 "read": true, 00:10:20.379 "write": true, 00:10:20.379 "unmap": true, 00:10:20.379 "flush": true, 00:10:20.379 "reset": true, 00:10:20.379 
"nvme_admin": false, 00:10:20.379 "nvme_io": false, 00:10:20.379 "nvme_io_md": false, 00:10:20.379 "write_zeroes": true, 00:10:20.379 "zcopy": true, 00:10:20.379 "get_zone_info": false, 00:10:20.379 "zone_management": false, 00:10:20.379 "zone_append": false, 00:10:20.379 "compare": false, 00:10:20.379 "compare_and_write": false, 00:10:20.379 "abort": true, 00:10:20.379 "seek_hole": false, 00:10:20.379 "seek_data": false, 00:10:20.379 "copy": true, 00:10:20.379 "nvme_iov_md": false 00:10:20.379 }, 00:10:20.379 "memory_domains": [ 00:10:20.379 { 00:10:20.379 "dma_device_id": "system", 00:10:20.379 "dma_device_type": 1 00:10:20.379 }, 00:10:20.379 { 00:10:20.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.379 "dma_device_type": 2 00:10:20.379 } 00:10:20.379 ], 00:10:20.379 "driver_specific": {} 00:10:20.379 } 00:10:20.379 ] 00:10:20.379 17:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.379 17:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:20.379 17:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:20.379 17:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:20.379 17:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:20.379 17:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:20.379 17:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:20.379 17:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:20.379 17:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:20.379 17:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:20.379 17:26:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.379 17:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.379 17:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.379 17:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.379 17:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.379 17:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:20.379 17:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.379 17:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.379 17:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.635 17:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.635 "name": "Existed_Raid", 00:10:20.635 "uuid": "09b8dab9-d222-471b-8d51-0b5cf97da81f", 00:10:20.635 "strip_size_kb": 64, 00:10:20.635 "state": "online", 00:10:20.635 "raid_level": "raid0", 00:10:20.635 "superblock": false, 00:10:20.635 "num_base_bdevs": 4, 00:10:20.635 "num_base_bdevs_discovered": 4, 00:10:20.635 "num_base_bdevs_operational": 4, 00:10:20.635 "base_bdevs_list": [ 00:10:20.635 { 00:10:20.635 "name": "BaseBdev1", 00:10:20.635 "uuid": "aa5698f1-70de-4d56-afde-d47b652c6985", 00:10:20.635 "is_configured": true, 00:10:20.635 "data_offset": 0, 00:10:20.635 "data_size": 65536 00:10:20.635 }, 00:10:20.635 { 00:10:20.635 "name": "BaseBdev2", 00:10:20.635 "uuid": "cb6e5b1a-5df5-48db-a790-588f6ee45831", 00:10:20.635 "is_configured": true, 00:10:20.635 "data_offset": 0, 00:10:20.635 "data_size": 65536 00:10:20.635 }, 00:10:20.635 { 00:10:20.635 "name": "BaseBdev3", 00:10:20.635 "uuid": 
"287d21c7-ea90-454c-a2b6-e141b769b61f", 00:10:20.635 "is_configured": true, 00:10:20.635 "data_offset": 0, 00:10:20.635 "data_size": 65536 00:10:20.635 }, 00:10:20.635 { 00:10:20.635 "name": "BaseBdev4", 00:10:20.635 "uuid": "c1f4f454-de79-4dcd-8606-fbe8b95ff132", 00:10:20.635 "is_configured": true, 00:10:20.635 "data_offset": 0, 00:10:20.635 "data_size": 65536 00:10:20.635 } 00:10:20.635 ] 00:10:20.635 }' 00:10:20.635 17:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.635 17:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.891 17:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:20.891 17:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:20.891 17:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:20.891 17:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:20.891 17:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:20.891 17:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:20.891 17:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:20.891 17:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:20.891 17:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.891 17:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.891 [2024-12-07 17:26:54.125126] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:20.891 17:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.891 17:26:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:20.891 "name": "Existed_Raid", 00:10:20.891 "aliases": [ 00:10:20.891 "09b8dab9-d222-471b-8d51-0b5cf97da81f" 00:10:20.891 ], 00:10:20.891 "product_name": "Raid Volume", 00:10:20.891 "block_size": 512, 00:10:20.891 "num_blocks": 262144, 00:10:20.891 "uuid": "09b8dab9-d222-471b-8d51-0b5cf97da81f", 00:10:20.891 "assigned_rate_limits": { 00:10:20.891 "rw_ios_per_sec": 0, 00:10:20.891 "rw_mbytes_per_sec": 0, 00:10:20.891 "r_mbytes_per_sec": 0, 00:10:20.891 "w_mbytes_per_sec": 0 00:10:20.891 }, 00:10:20.891 "claimed": false, 00:10:20.891 "zoned": false, 00:10:20.891 "supported_io_types": { 00:10:20.891 "read": true, 00:10:20.891 "write": true, 00:10:20.891 "unmap": true, 00:10:20.891 "flush": true, 00:10:20.891 "reset": true, 00:10:20.891 "nvme_admin": false, 00:10:20.891 "nvme_io": false, 00:10:20.891 "nvme_io_md": false, 00:10:20.891 "write_zeroes": true, 00:10:20.891 "zcopy": false, 00:10:20.891 "get_zone_info": false, 00:10:20.891 "zone_management": false, 00:10:20.891 "zone_append": false, 00:10:20.891 "compare": false, 00:10:20.891 "compare_and_write": false, 00:10:20.891 "abort": false, 00:10:20.891 "seek_hole": false, 00:10:20.891 "seek_data": false, 00:10:20.891 "copy": false, 00:10:20.891 "nvme_iov_md": false 00:10:20.891 }, 00:10:20.891 "memory_domains": [ 00:10:20.891 { 00:10:20.891 "dma_device_id": "system", 00:10:20.891 "dma_device_type": 1 00:10:20.891 }, 00:10:20.891 { 00:10:20.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.891 "dma_device_type": 2 00:10:20.891 }, 00:10:20.891 { 00:10:20.891 "dma_device_id": "system", 00:10:20.892 "dma_device_type": 1 00:10:20.892 }, 00:10:20.892 { 00:10:20.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.892 "dma_device_type": 2 00:10:20.892 }, 00:10:20.892 { 00:10:20.892 "dma_device_id": "system", 00:10:20.892 "dma_device_type": 1 00:10:20.892 }, 00:10:20.892 { 00:10:20.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:20.892 "dma_device_type": 2 00:10:20.892 }, 00:10:20.892 { 00:10:20.892 "dma_device_id": "system", 00:10:20.892 "dma_device_type": 1 00:10:20.892 }, 00:10:20.892 { 00:10:20.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.892 "dma_device_type": 2 00:10:20.892 } 00:10:20.892 ], 00:10:20.892 "driver_specific": { 00:10:20.892 "raid": { 00:10:20.892 "uuid": "09b8dab9-d222-471b-8d51-0b5cf97da81f", 00:10:20.892 "strip_size_kb": 64, 00:10:20.892 "state": "online", 00:10:20.892 "raid_level": "raid0", 00:10:20.892 "superblock": false, 00:10:20.892 "num_base_bdevs": 4, 00:10:20.892 "num_base_bdevs_discovered": 4, 00:10:20.892 "num_base_bdevs_operational": 4, 00:10:20.892 "base_bdevs_list": [ 00:10:20.892 { 00:10:20.892 "name": "BaseBdev1", 00:10:20.892 "uuid": "aa5698f1-70de-4d56-afde-d47b652c6985", 00:10:20.892 "is_configured": true, 00:10:20.892 "data_offset": 0, 00:10:20.892 "data_size": 65536 00:10:20.892 }, 00:10:20.892 { 00:10:20.892 "name": "BaseBdev2", 00:10:20.892 "uuid": "cb6e5b1a-5df5-48db-a790-588f6ee45831", 00:10:20.892 "is_configured": true, 00:10:20.892 "data_offset": 0, 00:10:20.892 "data_size": 65536 00:10:20.892 }, 00:10:20.892 { 00:10:20.892 "name": "BaseBdev3", 00:10:20.892 "uuid": "287d21c7-ea90-454c-a2b6-e141b769b61f", 00:10:20.892 "is_configured": true, 00:10:20.892 "data_offset": 0, 00:10:20.892 "data_size": 65536 00:10:20.892 }, 00:10:20.892 { 00:10:20.892 "name": "BaseBdev4", 00:10:20.892 "uuid": "c1f4f454-de79-4dcd-8606-fbe8b95ff132", 00:10:20.892 "is_configured": true, 00:10:20.892 "data_offset": 0, 00:10:20.892 "data_size": 65536 00:10:20.892 } 00:10:20.892 ] 00:10:20.892 } 00:10:20.892 } 00:10:20.892 }' 00:10:20.892 17:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:20.892 17:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:20.892 BaseBdev2 00:10:20.892 BaseBdev3 
00:10:20.892 BaseBdev4' 00:10:20.892 17:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.892 17:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:20.892 17:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.892 17:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:20.892 17:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.892 17:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.892 17:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.892 17:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.148 17:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.148 17:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.148 17:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.148 17:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:21.148 17:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.148 17:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.148 17:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.148 17:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.148 17:26:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.148 17:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.148 17:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.148 17:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:21.148 17:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.148 17:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.148 17:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.148 17:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.148 17:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.148 17:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.148 17:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.148 17:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.148 17:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:21.148 17:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.148 17:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.148 17:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.148 17:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.148 17:26:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.148 17:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:21.148 17:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.148 17:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.148 [2024-12-07 17:26:54.440167] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:21.148 [2024-12-07 17:26:54.440280] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:21.148 [2024-12-07 17:26:54.440344] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:21.406 17:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.406 17:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:21.406 17:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:21.406 17:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:21.406 17:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:21.406 17:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:21.406 17:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:21.406 17:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:21.406 17:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:21.406 17:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:21.406 17:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:21.406 17:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:21.406 17:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.406 17:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.406 17:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.406 17:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.406 17:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.406 17:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:21.406 17:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.406 17:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.406 17:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.406 17:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.406 "name": "Existed_Raid", 00:10:21.406 "uuid": "09b8dab9-d222-471b-8d51-0b5cf97da81f", 00:10:21.406 "strip_size_kb": 64, 00:10:21.406 "state": "offline", 00:10:21.406 "raid_level": "raid0", 00:10:21.406 "superblock": false, 00:10:21.406 "num_base_bdevs": 4, 00:10:21.406 "num_base_bdevs_discovered": 3, 00:10:21.406 "num_base_bdevs_operational": 3, 00:10:21.406 "base_bdevs_list": [ 00:10:21.406 { 00:10:21.406 "name": null, 00:10:21.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.406 "is_configured": false, 00:10:21.406 "data_offset": 0, 00:10:21.406 "data_size": 65536 00:10:21.406 }, 00:10:21.406 { 00:10:21.406 "name": "BaseBdev2", 00:10:21.406 "uuid": "cb6e5b1a-5df5-48db-a790-588f6ee45831", 00:10:21.406 "is_configured": 
true, 00:10:21.406 "data_offset": 0, 00:10:21.406 "data_size": 65536 00:10:21.406 }, 00:10:21.406 { 00:10:21.406 "name": "BaseBdev3", 00:10:21.406 "uuid": "287d21c7-ea90-454c-a2b6-e141b769b61f", 00:10:21.406 "is_configured": true, 00:10:21.406 "data_offset": 0, 00:10:21.406 "data_size": 65536 00:10:21.406 }, 00:10:21.406 { 00:10:21.406 "name": "BaseBdev4", 00:10:21.406 "uuid": "c1f4f454-de79-4dcd-8606-fbe8b95ff132", 00:10:21.406 "is_configured": true, 00:10:21.406 "data_offset": 0, 00:10:21.406 "data_size": 65536 00:10:21.406 } 00:10:21.406 ] 00:10:21.406 }' 00:10:21.406 17:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.406 17:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.714 17:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:21.714 17:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:21.714 17:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.714 17:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.714 17:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.714 17:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:21.714 17:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.714 17:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:21.714 17:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:21.714 17:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:21.714 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:21.714 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.714 [2024-12-07 17:26:55.017174] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:21.971 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.971 17:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:21.971 17:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:21.971 17:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:21.971 17:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.971 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.971 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.971 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.971 17:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:21.971 17:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:21.971 17:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:21.971 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.971 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.971 [2024-12-07 17:26:55.178146] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:21.971 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.971 17:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:21.971 17:26:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:21.971 17:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.971 17:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:21.971 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.971 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.971 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.971 17:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:21.971 17:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:21.971 17:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:21.971 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.971 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.971 [2024-12-07 17:26:55.339082] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:21.971 [2024-12-07 17:26:55.339220] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:22.229 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.229 17:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:22.229 17:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:22.230 17:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:22.230 17:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:22.230 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.230 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.230 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.230 17:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:22.230 17:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:22.230 17:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:22.230 17:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:22.230 17:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:22.230 17:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:22.230 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.230 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.230 BaseBdev2 00:10:22.230 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.230 17:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:22.230 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:22.230 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:22.230 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:22.230 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:22.230 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:10:22.230 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:22.230 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.230 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.230 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.230 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:22.230 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.230 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.230 [ 00:10:22.230 { 00:10:22.230 "name": "BaseBdev2", 00:10:22.230 "aliases": [ 00:10:22.230 "7470b670-6caa-4ce3-886f-81382c76d2d9" 00:10:22.230 ], 00:10:22.230 "product_name": "Malloc disk", 00:10:22.230 "block_size": 512, 00:10:22.230 "num_blocks": 65536, 00:10:22.230 "uuid": "7470b670-6caa-4ce3-886f-81382c76d2d9", 00:10:22.230 "assigned_rate_limits": { 00:10:22.230 "rw_ios_per_sec": 0, 00:10:22.230 "rw_mbytes_per_sec": 0, 00:10:22.230 "r_mbytes_per_sec": 0, 00:10:22.230 "w_mbytes_per_sec": 0 00:10:22.230 }, 00:10:22.230 "claimed": false, 00:10:22.230 "zoned": false, 00:10:22.230 "supported_io_types": { 00:10:22.230 "read": true, 00:10:22.230 "write": true, 00:10:22.230 "unmap": true, 00:10:22.230 "flush": true, 00:10:22.230 "reset": true, 00:10:22.230 "nvme_admin": false, 00:10:22.230 "nvme_io": false, 00:10:22.230 "nvme_io_md": false, 00:10:22.230 "write_zeroes": true, 00:10:22.230 "zcopy": true, 00:10:22.230 "get_zone_info": false, 00:10:22.230 "zone_management": false, 00:10:22.230 "zone_append": false, 00:10:22.230 "compare": false, 00:10:22.230 "compare_and_write": false, 00:10:22.230 "abort": true, 00:10:22.230 "seek_hole": false, 00:10:22.230 "seek_data": false, 
00:10:22.230 "copy": true, 00:10:22.230 "nvme_iov_md": false 00:10:22.230 }, 00:10:22.230 "memory_domains": [ 00:10:22.230 { 00:10:22.230 "dma_device_id": "system", 00:10:22.230 "dma_device_type": 1 00:10:22.230 }, 00:10:22.230 { 00:10:22.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.230 "dma_device_type": 2 00:10:22.230 } 00:10:22.230 ], 00:10:22.230 "driver_specific": {} 00:10:22.230 } 00:10:22.230 ] 00:10:22.230 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.230 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:22.230 17:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:22.230 17:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:22.230 17:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:22.230 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.230 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.488 BaseBdev3 00:10:22.488 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.488 17:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:22.488 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:22.488 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:22.488 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:22.488 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:22.488 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:22.488 
17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:22.488 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.488 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.488 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.488 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:22.488 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.488 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.488 [ 00:10:22.488 { 00:10:22.488 "name": "BaseBdev3", 00:10:22.488 "aliases": [ 00:10:22.488 "e0ae3d0c-f912-4454-935e-6be8f16fabfc" 00:10:22.488 ], 00:10:22.488 "product_name": "Malloc disk", 00:10:22.488 "block_size": 512, 00:10:22.488 "num_blocks": 65536, 00:10:22.488 "uuid": "e0ae3d0c-f912-4454-935e-6be8f16fabfc", 00:10:22.488 "assigned_rate_limits": { 00:10:22.488 "rw_ios_per_sec": 0, 00:10:22.488 "rw_mbytes_per_sec": 0, 00:10:22.488 "r_mbytes_per_sec": 0, 00:10:22.488 "w_mbytes_per_sec": 0 00:10:22.488 }, 00:10:22.488 "claimed": false, 00:10:22.488 "zoned": false, 00:10:22.488 "supported_io_types": { 00:10:22.488 "read": true, 00:10:22.488 "write": true, 00:10:22.488 "unmap": true, 00:10:22.488 "flush": true, 00:10:22.488 "reset": true, 00:10:22.488 "nvme_admin": false, 00:10:22.488 "nvme_io": false, 00:10:22.488 "nvme_io_md": false, 00:10:22.488 "write_zeroes": true, 00:10:22.488 "zcopy": true, 00:10:22.488 "get_zone_info": false, 00:10:22.488 "zone_management": false, 00:10:22.488 "zone_append": false, 00:10:22.488 "compare": false, 00:10:22.488 "compare_and_write": false, 00:10:22.488 "abort": true, 00:10:22.488 "seek_hole": false, 00:10:22.488 "seek_data": false, 00:10:22.488 
"copy": true, 00:10:22.488 "nvme_iov_md": false 00:10:22.488 }, 00:10:22.488 "memory_domains": [ 00:10:22.488 { 00:10:22.488 "dma_device_id": "system", 00:10:22.488 "dma_device_type": 1 00:10:22.488 }, 00:10:22.488 { 00:10:22.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.488 "dma_device_type": 2 00:10:22.488 } 00:10:22.488 ], 00:10:22.488 "driver_specific": {} 00:10:22.488 } 00:10:22.488 ] 00:10:22.488 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.488 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:22.488 17:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:22.488 17:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:22.488 17:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:22.488 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.488 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.488 BaseBdev4 00:10:22.488 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.488 17:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:22.488 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:22.488 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:22.488 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:22.488 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:22.488 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:22.488 17:26:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:22.488 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.488 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.488 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.488 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:22.488 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.488 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.488 [ 00:10:22.488 { 00:10:22.488 "name": "BaseBdev4", 00:10:22.488 "aliases": [ 00:10:22.488 "329b3418-3370-4f7f-85bf-2b809718dfe4" 00:10:22.488 ], 00:10:22.488 "product_name": "Malloc disk", 00:10:22.488 "block_size": 512, 00:10:22.488 "num_blocks": 65536, 00:10:22.488 "uuid": "329b3418-3370-4f7f-85bf-2b809718dfe4", 00:10:22.488 "assigned_rate_limits": { 00:10:22.488 "rw_ios_per_sec": 0, 00:10:22.488 "rw_mbytes_per_sec": 0, 00:10:22.488 "r_mbytes_per_sec": 0, 00:10:22.488 "w_mbytes_per_sec": 0 00:10:22.488 }, 00:10:22.488 "claimed": false, 00:10:22.488 "zoned": false, 00:10:22.488 "supported_io_types": { 00:10:22.488 "read": true, 00:10:22.488 "write": true, 00:10:22.488 "unmap": true, 00:10:22.488 "flush": true, 00:10:22.488 "reset": true, 00:10:22.488 "nvme_admin": false, 00:10:22.488 "nvme_io": false, 00:10:22.488 "nvme_io_md": false, 00:10:22.488 "write_zeroes": true, 00:10:22.488 "zcopy": true, 00:10:22.488 "get_zone_info": false, 00:10:22.488 "zone_management": false, 00:10:22.488 "zone_append": false, 00:10:22.488 "compare": false, 00:10:22.488 "compare_and_write": false, 00:10:22.488 "abort": true, 00:10:22.488 "seek_hole": false, 00:10:22.488 "seek_data": false, 00:10:22.488 "copy": true, 
00:10:22.488 "nvme_iov_md": false 00:10:22.488 }, 00:10:22.488 "memory_domains": [ 00:10:22.488 { 00:10:22.488 "dma_device_id": "system", 00:10:22.488 "dma_device_type": 1 00:10:22.488 }, 00:10:22.488 { 00:10:22.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.488 "dma_device_type": 2 00:10:22.488 } 00:10:22.488 ], 00:10:22.488 "driver_specific": {} 00:10:22.488 } 00:10:22.488 ] 00:10:22.488 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.488 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:22.488 17:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:22.488 17:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:22.488 17:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:22.488 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.488 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.488 [2024-12-07 17:26:55.753769] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:22.488 [2024-12-07 17:26:55.753901] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:22.488 [2024-12-07 17:26:55.753956] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:22.488 [2024-12-07 17:26:55.756074] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:22.488 [2024-12-07 17:26:55.756169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:22.488 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.488 17:26:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:22.488 17:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:22.488 17:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:22.488 17:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:22.488 17:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:22.488 17:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:22.488 17:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.488 17:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.488 17:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.488 17:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.488 17:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.488 17:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:22.488 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.488 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.488 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.488 17:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.489 "name": "Existed_Raid", 00:10:22.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.489 "strip_size_kb": 64, 00:10:22.489 "state": "configuring", 00:10:22.489 
"raid_level": "raid0", 00:10:22.489 "superblock": false, 00:10:22.489 "num_base_bdevs": 4, 00:10:22.489 "num_base_bdevs_discovered": 3, 00:10:22.489 "num_base_bdevs_operational": 4, 00:10:22.489 "base_bdevs_list": [ 00:10:22.489 { 00:10:22.489 "name": "BaseBdev1", 00:10:22.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.489 "is_configured": false, 00:10:22.489 "data_offset": 0, 00:10:22.489 "data_size": 0 00:10:22.489 }, 00:10:22.489 { 00:10:22.489 "name": "BaseBdev2", 00:10:22.489 "uuid": "7470b670-6caa-4ce3-886f-81382c76d2d9", 00:10:22.489 "is_configured": true, 00:10:22.489 "data_offset": 0, 00:10:22.489 "data_size": 65536 00:10:22.489 }, 00:10:22.489 { 00:10:22.489 "name": "BaseBdev3", 00:10:22.489 "uuid": "e0ae3d0c-f912-4454-935e-6be8f16fabfc", 00:10:22.489 "is_configured": true, 00:10:22.489 "data_offset": 0, 00:10:22.489 "data_size": 65536 00:10:22.489 }, 00:10:22.489 { 00:10:22.489 "name": "BaseBdev4", 00:10:22.489 "uuid": "329b3418-3370-4f7f-85bf-2b809718dfe4", 00:10:22.489 "is_configured": true, 00:10:22.489 "data_offset": 0, 00:10:22.489 "data_size": 65536 00:10:22.489 } 00:10:22.489 ] 00:10:22.489 }' 00:10:22.489 17:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.489 17:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.053 17:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:23.053 17:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.053 17:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.053 [2024-12-07 17:26:56.221038] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:23.053 17:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.053 17:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:23.053 17:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:23.053 17:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:23.053 17:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:23.053 17:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:23.053 17:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:23.053 17:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.053 17:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.053 17:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.053 17:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.053 17:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.053 17:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:23.053 17:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.053 17:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.053 17:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.053 17:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.053 "name": "Existed_Raid", 00:10:23.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.053 "strip_size_kb": 64, 00:10:23.053 "state": "configuring", 00:10:23.053 "raid_level": "raid0", 00:10:23.053 "superblock": false, 00:10:23.053 
"num_base_bdevs": 4, 00:10:23.053 "num_base_bdevs_discovered": 2, 00:10:23.053 "num_base_bdevs_operational": 4, 00:10:23.053 "base_bdevs_list": [ 00:10:23.053 { 00:10:23.053 "name": "BaseBdev1", 00:10:23.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.053 "is_configured": false, 00:10:23.053 "data_offset": 0, 00:10:23.053 "data_size": 0 00:10:23.053 }, 00:10:23.053 { 00:10:23.053 "name": null, 00:10:23.053 "uuid": "7470b670-6caa-4ce3-886f-81382c76d2d9", 00:10:23.053 "is_configured": false, 00:10:23.053 "data_offset": 0, 00:10:23.053 "data_size": 65536 00:10:23.053 }, 00:10:23.053 { 00:10:23.053 "name": "BaseBdev3", 00:10:23.053 "uuid": "e0ae3d0c-f912-4454-935e-6be8f16fabfc", 00:10:23.053 "is_configured": true, 00:10:23.053 "data_offset": 0, 00:10:23.053 "data_size": 65536 00:10:23.053 }, 00:10:23.053 { 00:10:23.053 "name": "BaseBdev4", 00:10:23.053 "uuid": "329b3418-3370-4f7f-85bf-2b809718dfe4", 00:10:23.053 "is_configured": true, 00:10:23.053 "data_offset": 0, 00:10:23.053 "data_size": 65536 00:10:23.053 } 00:10:23.053 ] 00:10:23.053 }' 00:10:23.053 17:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.053 17:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.311 17:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:23.311 17:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.311 17:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.311 17:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.311 17:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.311 17:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:23.311 17:26:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:23.311 17:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.311 17:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.570 [2024-12-07 17:26:56.722536] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:23.570 BaseBdev1 00:10:23.570 17:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.570 17:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:23.570 17:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:23.570 17:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:23.570 17:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:23.570 17:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:23.570 17:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:23.570 17:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:23.570 17:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.570 17:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.570 17:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.570 17:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:23.570 17:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.570 17:26:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:23.570 [ 00:10:23.570 { 00:10:23.570 "name": "BaseBdev1", 00:10:23.570 "aliases": [ 00:10:23.570 "1fce237f-5194-4b07-822f-aff2d3bbc810" 00:10:23.570 ], 00:10:23.570 "product_name": "Malloc disk", 00:10:23.570 "block_size": 512, 00:10:23.570 "num_blocks": 65536, 00:10:23.570 "uuid": "1fce237f-5194-4b07-822f-aff2d3bbc810", 00:10:23.570 "assigned_rate_limits": { 00:10:23.570 "rw_ios_per_sec": 0, 00:10:23.570 "rw_mbytes_per_sec": 0, 00:10:23.570 "r_mbytes_per_sec": 0, 00:10:23.570 "w_mbytes_per_sec": 0 00:10:23.570 }, 00:10:23.570 "claimed": true, 00:10:23.570 "claim_type": "exclusive_write", 00:10:23.570 "zoned": false, 00:10:23.570 "supported_io_types": { 00:10:23.570 "read": true, 00:10:23.570 "write": true, 00:10:23.570 "unmap": true, 00:10:23.571 "flush": true, 00:10:23.571 "reset": true, 00:10:23.571 "nvme_admin": false, 00:10:23.571 "nvme_io": false, 00:10:23.571 "nvme_io_md": false, 00:10:23.571 "write_zeroes": true, 00:10:23.571 "zcopy": true, 00:10:23.571 "get_zone_info": false, 00:10:23.571 "zone_management": false, 00:10:23.571 "zone_append": false, 00:10:23.571 "compare": false, 00:10:23.571 "compare_and_write": false, 00:10:23.571 "abort": true, 00:10:23.571 "seek_hole": false, 00:10:23.571 "seek_data": false, 00:10:23.571 "copy": true, 00:10:23.571 "nvme_iov_md": false 00:10:23.571 }, 00:10:23.571 "memory_domains": [ 00:10:23.571 { 00:10:23.571 "dma_device_id": "system", 00:10:23.571 "dma_device_type": 1 00:10:23.571 }, 00:10:23.571 { 00:10:23.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.571 "dma_device_type": 2 00:10:23.571 } 00:10:23.571 ], 00:10:23.571 "driver_specific": {} 00:10:23.571 } 00:10:23.571 ] 00:10:23.571 17:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.571 17:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:23.571 17:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:23.571 17:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:23.571 17:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:23.571 17:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:23.571 17:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:23.571 17:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:23.571 17:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.571 17:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.571 17:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.571 17:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.571 17:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.571 17:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:23.571 17:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.571 17:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.571 17:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.571 17:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.571 "name": "Existed_Raid", 00:10:23.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.571 "strip_size_kb": 64, 00:10:23.571 "state": "configuring", 00:10:23.571 "raid_level": "raid0", 00:10:23.571 "superblock": false, 
00:10:23.571 "num_base_bdevs": 4, 00:10:23.571 "num_base_bdevs_discovered": 3, 00:10:23.571 "num_base_bdevs_operational": 4, 00:10:23.571 "base_bdevs_list": [ 00:10:23.571 { 00:10:23.571 "name": "BaseBdev1", 00:10:23.571 "uuid": "1fce237f-5194-4b07-822f-aff2d3bbc810", 00:10:23.571 "is_configured": true, 00:10:23.571 "data_offset": 0, 00:10:23.571 "data_size": 65536 00:10:23.571 }, 00:10:23.571 { 00:10:23.571 "name": null, 00:10:23.571 "uuid": "7470b670-6caa-4ce3-886f-81382c76d2d9", 00:10:23.571 "is_configured": false, 00:10:23.571 "data_offset": 0, 00:10:23.571 "data_size": 65536 00:10:23.571 }, 00:10:23.571 { 00:10:23.571 "name": "BaseBdev3", 00:10:23.571 "uuid": "e0ae3d0c-f912-4454-935e-6be8f16fabfc", 00:10:23.571 "is_configured": true, 00:10:23.571 "data_offset": 0, 00:10:23.571 "data_size": 65536 00:10:23.571 }, 00:10:23.571 { 00:10:23.571 "name": "BaseBdev4", 00:10:23.571 "uuid": "329b3418-3370-4f7f-85bf-2b809718dfe4", 00:10:23.571 "is_configured": true, 00:10:23.571 "data_offset": 0, 00:10:23.571 "data_size": 65536 00:10:23.571 } 00:10:23.571 ] 00:10:23.571 }' 00:10:23.571 17:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.571 17:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.139 17:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.139 17:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.139 17:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.139 17:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:24.139 17:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.139 17:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:24.139 17:26:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:24.139 17:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.139 17:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.139 [2024-12-07 17:26:57.269697] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:24.139 17:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.139 17:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:24.139 17:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:24.139 17:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:24.139 17:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:24.139 17:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:24.139 17:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:24.139 17:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.139 17:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.139 17:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.139 17:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.139 17:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:24.139 17:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.139 17:26:57 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.139 17:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.139 17:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.139 17:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.139 "name": "Existed_Raid", 00:10:24.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.139 "strip_size_kb": 64, 00:10:24.139 "state": "configuring", 00:10:24.139 "raid_level": "raid0", 00:10:24.139 "superblock": false, 00:10:24.139 "num_base_bdevs": 4, 00:10:24.139 "num_base_bdevs_discovered": 2, 00:10:24.139 "num_base_bdevs_operational": 4, 00:10:24.139 "base_bdevs_list": [ 00:10:24.139 { 00:10:24.139 "name": "BaseBdev1", 00:10:24.139 "uuid": "1fce237f-5194-4b07-822f-aff2d3bbc810", 00:10:24.139 "is_configured": true, 00:10:24.139 "data_offset": 0, 00:10:24.139 "data_size": 65536 00:10:24.139 }, 00:10:24.139 { 00:10:24.139 "name": null, 00:10:24.139 "uuid": "7470b670-6caa-4ce3-886f-81382c76d2d9", 00:10:24.139 "is_configured": false, 00:10:24.139 "data_offset": 0, 00:10:24.139 "data_size": 65536 00:10:24.139 }, 00:10:24.139 { 00:10:24.139 "name": null, 00:10:24.139 "uuid": "e0ae3d0c-f912-4454-935e-6be8f16fabfc", 00:10:24.139 "is_configured": false, 00:10:24.139 "data_offset": 0, 00:10:24.139 "data_size": 65536 00:10:24.139 }, 00:10:24.139 { 00:10:24.139 "name": "BaseBdev4", 00:10:24.139 "uuid": "329b3418-3370-4f7f-85bf-2b809718dfe4", 00:10:24.139 "is_configured": true, 00:10:24.139 "data_offset": 0, 00:10:24.139 "data_size": 65536 00:10:24.139 } 00:10:24.139 ] 00:10:24.139 }' 00:10:24.139 17:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.139 17:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.398 17:26:57 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:24.398 17:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.398 17:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.398 17:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.398 17:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.398 17:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:24.398 17:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:24.398 17:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.398 17:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.398 [2024-12-07 17:26:57.720959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:24.398 17:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.398 17:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:24.398 17:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:24.398 17:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:24.398 17:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:24.398 17:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:24.398 17:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:24.398 17:26:57 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.398 17:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.398 17:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.398 17:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.398 17:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.398 17:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:24.398 17:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.398 17:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.398 17:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.657 17:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.657 "name": "Existed_Raid", 00:10:24.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.657 "strip_size_kb": 64, 00:10:24.657 "state": "configuring", 00:10:24.657 "raid_level": "raid0", 00:10:24.657 "superblock": false, 00:10:24.657 "num_base_bdevs": 4, 00:10:24.657 "num_base_bdevs_discovered": 3, 00:10:24.657 "num_base_bdevs_operational": 4, 00:10:24.657 "base_bdevs_list": [ 00:10:24.657 { 00:10:24.657 "name": "BaseBdev1", 00:10:24.657 "uuid": "1fce237f-5194-4b07-822f-aff2d3bbc810", 00:10:24.657 "is_configured": true, 00:10:24.657 "data_offset": 0, 00:10:24.657 "data_size": 65536 00:10:24.657 }, 00:10:24.657 { 00:10:24.657 "name": null, 00:10:24.657 "uuid": "7470b670-6caa-4ce3-886f-81382c76d2d9", 00:10:24.657 "is_configured": false, 00:10:24.657 "data_offset": 0, 00:10:24.657 "data_size": 65536 00:10:24.657 }, 00:10:24.657 { 00:10:24.657 "name": "BaseBdev3", 00:10:24.657 "uuid": "e0ae3d0c-f912-4454-935e-6be8f16fabfc", 
00:10:24.657 "is_configured": true, 00:10:24.657 "data_offset": 0, 00:10:24.657 "data_size": 65536 00:10:24.657 }, 00:10:24.657 { 00:10:24.657 "name": "BaseBdev4", 00:10:24.657 "uuid": "329b3418-3370-4f7f-85bf-2b809718dfe4", 00:10:24.657 "is_configured": true, 00:10:24.657 "data_offset": 0, 00:10:24.657 "data_size": 65536 00:10:24.657 } 00:10:24.657 ] 00:10:24.657 }' 00:10:24.657 17:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.657 17:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.917 17:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:24.917 17:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.917 17:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.917 17:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.917 17:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.917 17:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:24.917 17:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:24.917 17:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.917 17:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.917 [2024-12-07 17:26:58.224205] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:25.176 17:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.176 17:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:25.176 17:26:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:25.176 17:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.176 17:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:25.176 17:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:25.176 17:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:25.176 17:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.176 17:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.176 17:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.176 17:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.176 17:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.176 17:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.176 17:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.176 17:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.176 17:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.176 17:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.176 "name": "Existed_Raid", 00:10:25.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.176 "strip_size_kb": 64, 00:10:25.176 "state": "configuring", 00:10:25.176 "raid_level": "raid0", 00:10:25.176 "superblock": false, 00:10:25.176 "num_base_bdevs": 4, 00:10:25.176 "num_base_bdevs_discovered": 2, 00:10:25.176 
"num_base_bdevs_operational": 4, 00:10:25.176 "base_bdevs_list": [ 00:10:25.176 { 00:10:25.176 "name": null, 00:10:25.176 "uuid": "1fce237f-5194-4b07-822f-aff2d3bbc810", 00:10:25.176 "is_configured": false, 00:10:25.176 "data_offset": 0, 00:10:25.176 "data_size": 65536 00:10:25.176 }, 00:10:25.176 { 00:10:25.176 "name": null, 00:10:25.176 "uuid": "7470b670-6caa-4ce3-886f-81382c76d2d9", 00:10:25.176 "is_configured": false, 00:10:25.176 "data_offset": 0, 00:10:25.176 "data_size": 65536 00:10:25.176 }, 00:10:25.176 { 00:10:25.176 "name": "BaseBdev3", 00:10:25.176 "uuid": "e0ae3d0c-f912-4454-935e-6be8f16fabfc", 00:10:25.176 "is_configured": true, 00:10:25.176 "data_offset": 0, 00:10:25.176 "data_size": 65536 00:10:25.176 }, 00:10:25.176 { 00:10:25.176 "name": "BaseBdev4", 00:10:25.176 "uuid": "329b3418-3370-4f7f-85bf-2b809718dfe4", 00:10:25.176 "is_configured": true, 00:10:25.176 "data_offset": 0, 00:10:25.176 "data_size": 65536 00:10:25.176 } 00:10:25.176 ] 00:10:25.176 }' 00:10:25.176 17:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.176 17:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.435 17:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:25.435 17:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.435 17:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.435 17:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.435 17:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.435 17:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:25.435 17:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:10:25.435 17:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.435 17:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.435 [2024-12-07 17:26:58.813192] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:25.694 17:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.694 17:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:25.694 17:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:25.694 17:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.694 17:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:25.694 17:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:25.694 17:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:25.694 17:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.694 17:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.694 17:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.694 17:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.694 17:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.694 17:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.694 17:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.694 
17:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.694 17:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.694 17:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.694 "name": "Existed_Raid", 00:10:25.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.695 "strip_size_kb": 64, 00:10:25.695 "state": "configuring", 00:10:25.695 "raid_level": "raid0", 00:10:25.695 "superblock": false, 00:10:25.695 "num_base_bdevs": 4, 00:10:25.695 "num_base_bdevs_discovered": 3, 00:10:25.695 "num_base_bdevs_operational": 4, 00:10:25.695 "base_bdevs_list": [ 00:10:25.695 { 00:10:25.695 "name": null, 00:10:25.695 "uuid": "1fce237f-5194-4b07-822f-aff2d3bbc810", 00:10:25.695 "is_configured": false, 00:10:25.695 "data_offset": 0, 00:10:25.695 "data_size": 65536 00:10:25.695 }, 00:10:25.695 { 00:10:25.695 "name": "BaseBdev2", 00:10:25.695 "uuid": "7470b670-6caa-4ce3-886f-81382c76d2d9", 00:10:25.695 "is_configured": true, 00:10:25.695 "data_offset": 0, 00:10:25.695 "data_size": 65536 00:10:25.695 }, 00:10:25.695 { 00:10:25.695 "name": "BaseBdev3", 00:10:25.695 "uuid": "e0ae3d0c-f912-4454-935e-6be8f16fabfc", 00:10:25.695 "is_configured": true, 00:10:25.695 "data_offset": 0, 00:10:25.695 "data_size": 65536 00:10:25.695 }, 00:10:25.695 { 00:10:25.695 "name": "BaseBdev4", 00:10:25.695 "uuid": "329b3418-3370-4f7f-85bf-2b809718dfe4", 00:10:25.695 "is_configured": true, 00:10:25.695 "data_offset": 0, 00:10:25.695 "data_size": 65536 00:10:25.695 } 00:10:25.695 ] 00:10:25.695 }' 00:10:25.695 17:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.695 17:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.953 17:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.953 17:26:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:25.953 17:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.953 17:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.953 17:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.953 17:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:25.954 17:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.954 17:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:25.954 17:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.954 17:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.954 17:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.954 17:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 1fce237f-5194-4b07-822f-aff2d3bbc810 00:10:25.954 17:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.954 17:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.213 [2024-12-07 17:26:59.345433] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:26.213 [2024-12-07 17:26:59.345573] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:26.213 [2024-12-07 17:26:59.345599] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:26.213 [2024-12-07 17:26:59.345927] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 
00:10:26.213 [2024-12-07 17:26:59.346142] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:26.213 [2024-12-07 17:26:59.346182] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:26.213 [2024-12-07 17:26:59.346489] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:26.213 NewBaseBdev 00:10:26.213 17:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.213 17:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:26.213 17:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:26.213 17:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:26.213 17:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:26.213 17:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:26.213 17:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:26.213 17:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:26.213 17:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.213 17:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.213 17:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.213 17:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:26.213 17:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.213 17:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:10:26.213 [ 00:10:26.213 { 00:10:26.213 "name": "NewBaseBdev", 00:10:26.213 "aliases": [ 00:10:26.213 "1fce237f-5194-4b07-822f-aff2d3bbc810" 00:10:26.213 ], 00:10:26.213 "product_name": "Malloc disk", 00:10:26.213 "block_size": 512, 00:10:26.213 "num_blocks": 65536, 00:10:26.213 "uuid": "1fce237f-5194-4b07-822f-aff2d3bbc810", 00:10:26.213 "assigned_rate_limits": { 00:10:26.213 "rw_ios_per_sec": 0, 00:10:26.213 "rw_mbytes_per_sec": 0, 00:10:26.213 "r_mbytes_per_sec": 0, 00:10:26.213 "w_mbytes_per_sec": 0 00:10:26.213 }, 00:10:26.213 "claimed": true, 00:10:26.213 "claim_type": "exclusive_write", 00:10:26.213 "zoned": false, 00:10:26.213 "supported_io_types": { 00:10:26.213 "read": true, 00:10:26.213 "write": true, 00:10:26.213 "unmap": true, 00:10:26.213 "flush": true, 00:10:26.213 "reset": true, 00:10:26.213 "nvme_admin": false, 00:10:26.213 "nvme_io": false, 00:10:26.213 "nvme_io_md": false, 00:10:26.213 "write_zeroes": true, 00:10:26.213 "zcopy": true, 00:10:26.213 "get_zone_info": false, 00:10:26.213 "zone_management": false, 00:10:26.213 "zone_append": false, 00:10:26.213 "compare": false, 00:10:26.213 "compare_and_write": false, 00:10:26.213 "abort": true, 00:10:26.213 "seek_hole": false, 00:10:26.213 "seek_data": false, 00:10:26.213 "copy": true, 00:10:26.213 "nvme_iov_md": false 00:10:26.213 }, 00:10:26.213 "memory_domains": [ 00:10:26.213 { 00:10:26.213 "dma_device_id": "system", 00:10:26.213 "dma_device_type": 1 00:10:26.213 }, 00:10:26.213 { 00:10:26.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.213 "dma_device_type": 2 00:10:26.213 } 00:10:26.213 ], 00:10:26.213 "driver_specific": {} 00:10:26.213 } 00:10:26.213 ] 00:10:26.213 17:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.213 17:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:26.213 17:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 4 00:10:26.213 17:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.213 17:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:26.213 17:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:26.213 17:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:26.213 17:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:26.213 17:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.213 17:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.213 17:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.213 17:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.213 17:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.213 17:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.213 17:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.213 17:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.213 17:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.213 17:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.213 "name": "Existed_Raid", 00:10:26.213 "uuid": "a39c9974-39de-4e07-9984-3c5f11833e10", 00:10:26.213 "strip_size_kb": 64, 00:10:26.213 "state": "online", 00:10:26.213 "raid_level": "raid0", 00:10:26.213 "superblock": false, 00:10:26.213 "num_base_bdevs": 4, 00:10:26.213 
"num_base_bdevs_discovered": 4, 00:10:26.213 "num_base_bdevs_operational": 4, 00:10:26.213 "base_bdevs_list": [ 00:10:26.213 { 00:10:26.213 "name": "NewBaseBdev", 00:10:26.213 "uuid": "1fce237f-5194-4b07-822f-aff2d3bbc810", 00:10:26.213 "is_configured": true, 00:10:26.213 "data_offset": 0, 00:10:26.213 "data_size": 65536 00:10:26.213 }, 00:10:26.213 { 00:10:26.213 "name": "BaseBdev2", 00:10:26.213 "uuid": "7470b670-6caa-4ce3-886f-81382c76d2d9", 00:10:26.213 "is_configured": true, 00:10:26.213 "data_offset": 0, 00:10:26.213 "data_size": 65536 00:10:26.213 }, 00:10:26.213 { 00:10:26.213 "name": "BaseBdev3", 00:10:26.213 "uuid": "e0ae3d0c-f912-4454-935e-6be8f16fabfc", 00:10:26.213 "is_configured": true, 00:10:26.213 "data_offset": 0, 00:10:26.213 "data_size": 65536 00:10:26.213 }, 00:10:26.213 { 00:10:26.213 "name": "BaseBdev4", 00:10:26.213 "uuid": "329b3418-3370-4f7f-85bf-2b809718dfe4", 00:10:26.213 "is_configured": true, 00:10:26.213 "data_offset": 0, 00:10:26.213 "data_size": 65536 00:10:26.213 } 00:10:26.213 ] 00:10:26.213 }' 00:10:26.213 17:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.213 17:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.472 17:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:26.472 17:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:26.472 17:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:26.472 17:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:26.472 17:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:26.472 17:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:26.472 17:26:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:26.472 17:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.472 17:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.472 17:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:26.472 [2024-12-07 17:26:59.741259] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:26.472 17:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.472 17:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:26.472 "name": "Existed_Raid", 00:10:26.472 "aliases": [ 00:10:26.472 "a39c9974-39de-4e07-9984-3c5f11833e10" 00:10:26.472 ], 00:10:26.472 "product_name": "Raid Volume", 00:10:26.472 "block_size": 512, 00:10:26.472 "num_blocks": 262144, 00:10:26.472 "uuid": "a39c9974-39de-4e07-9984-3c5f11833e10", 00:10:26.472 "assigned_rate_limits": { 00:10:26.472 "rw_ios_per_sec": 0, 00:10:26.472 "rw_mbytes_per_sec": 0, 00:10:26.472 "r_mbytes_per_sec": 0, 00:10:26.472 "w_mbytes_per_sec": 0 00:10:26.472 }, 00:10:26.472 "claimed": false, 00:10:26.472 "zoned": false, 00:10:26.472 "supported_io_types": { 00:10:26.472 "read": true, 00:10:26.472 "write": true, 00:10:26.472 "unmap": true, 00:10:26.472 "flush": true, 00:10:26.472 "reset": true, 00:10:26.472 "nvme_admin": false, 00:10:26.472 "nvme_io": false, 00:10:26.472 "nvme_io_md": false, 00:10:26.472 "write_zeroes": true, 00:10:26.472 "zcopy": false, 00:10:26.472 "get_zone_info": false, 00:10:26.472 "zone_management": false, 00:10:26.472 "zone_append": false, 00:10:26.472 "compare": false, 00:10:26.472 "compare_and_write": false, 00:10:26.472 "abort": false, 00:10:26.472 "seek_hole": false, 00:10:26.472 "seek_data": false, 00:10:26.472 "copy": false, 00:10:26.472 "nvme_iov_md": false 00:10:26.472 }, 00:10:26.472 "memory_domains": [ 
00:10:26.472 { 00:10:26.472 "dma_device_id": "system", 00:10:26.472 "dma_device_type": 1 00:10:26.472 }, 00:10:26.472 { 00:10:26.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.473 "dma_device_type": 2 00:10:26.473 }, 00:10:26.473 { 00:10:26.473 "dma_device_id": "system", 00:10:26.473 "dma_device_type": 1 00:10:26.473 }, 00:10:26.473 { 00:10:26.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.473 "dma_device_type": 2 00:10:26.473 }, 00:10:26.473 { 00:10:26.473 "dma_device_id": "system", 00:10:26.473 "dma_device_type": 1 00:10:26.473 }, 00:10:26.473 { 00:10:26.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.473 "dma_device_type": 2 00:10:26.473 }, 00:10:26.473 { 00:10:26.473 "dma_device_id": "system", 00:10:26.473 "dma_device_type": 1 00:10:26.473 }, 00:10:26.473 { 00:10:26.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.473 "dma_device_type": 2 00:10:26.473 } 00:10:26.473 ], 00:10:26.473 "driver_specific": { 00:10:26.473 "raid": { 00:10:26.473 "uuid": "a39c9974-39de-4e07-9984-3c5f11833e10", 00:10:26.473 "strip_size_kb": 64, 00:10:26.473 "state": "online", 00:10:26.473 "raid_level": "raid0", 00:10:26.473 "superblock": false, 00:10:26.473 "num_base_bdevs": 4, 00:10:26.473 "num_base_bdevs_discovered": 4, 00:10:26.473 "num_base_bdevs_operational": 4, 00:10:26.473 "base_bdevs_list": [ 00:10:26.473 { 00:10:26.473 "name": "NewBaseBdev", 00:10:26.473 "uuid": "1fce237f-5194-4b07-822f-aff2d3bbc810", 00:10:26.473 "is_configured": true, 00:10:26.473 "data_offset": 0, 00:10:26.473 "data_size": 65536 00:10:26.473 }, 00:10:26.473 { 00:10:26.473 "name": "BaseBdev2", 00:10:26.473 "uuid": "7470b670-6caa-4ce3-886f-81382c76d2d9", 00:10:26.473 "is_configured": true, 00:10:26.473 "data_offset": 0, 00:10:26.473 "data_size": 65536 00:10:26.473 }, 00:10:26.473 { 00:10:26.473 "name": "BaseBdev3", 00:10:26.473 "uuid": "e0ae3d0c-f912-4454-935e-6be8f16fabfc", 00:10:26.473 "is_configured": true, 00:10:26.473 "data_offset": 0, 00:10:26.473 "data_size": 65536 
00:10:26.473 }, 00:10:26.473 { 00:10:26.473 "name": "BaseBdev4", 00:10:26.473 "uuid": "329b3418-3370-4f7f-85bf-2b809718dfe4", 00:10:26.473 "is_configured": true, 00:10:26.473 "data_offset": 0, 00:10:26.473 "data_size": 65536 00:10:26.473 } 00:10:26.473 ] 00:10:26.473 } 00:10:26.473 } 00:10:26.473 }' 00:10:26.473 17:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:26.473 17:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:26.473 BaseBdev2 00:10:26.473 BaseBdev3 00:10:26.473 BaseBdev4' 00:10:26.473 17:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:26.734 17:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:26.734 17:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:26.734 17:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:26.734 17:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:26.734 17:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.734 17:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.734 17:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.734 17:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:26.734 17:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:26.734 17:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:26.734 
17:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:26.734 17:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.734 17:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.734 17:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:26.734 17:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.734 17:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:26.734 17:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:26.734 17:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:26.734 17:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:26.734 17:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.734 17:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.734 17:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:26.734 17:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.734 17:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:26.734 17:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:26.734 17:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:26.734 17:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 
00:10:26.734 17:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:26.734 17:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.734 17:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.734 17:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.734 17:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:26.734 17:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:26.734 17:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:26.734 17:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.734 17:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.734 [2024-12-07 17:27:00.080199] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:26.734 [2024-12-07 17:27:00.080327] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:26.734 [2024-12-07 17:27:00.080436] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:26.734 [2024-12-07 17:27:00.080529] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:26.734 [2024-12-07 17:27:00.080574] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:26.734 17:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.734 17:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69394 00:10:26.734 17:27:00 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 69394 ']' 00:10:26.734 17:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69394 00:10:26.734 17:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:26.734 17:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:26.734 17:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69394 00:10:26.995 killing process with pid 69394 00:10:26.995 17:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:26.995 17:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:26.995 17:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69394' 00:10:26.995 17:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69394 00:10:26.995 [2024-12-07 17:27:00.126231] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:26.995 17:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69394 00:10:27.253 [2024-12-07 17:27:00.555773] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:28.630 17:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:28.630 00:10:28.630 real 0m11.448s 00:10:28.630 user 0m17.942s 00:10:28.630 sys 0m2.093s 00:10:28.630 17:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:28.630 ************************************ 00:10:28.630 END TEST raid_state_function_test 00:10:28.630 ************************************ 00:10:28.630 17:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.630 17:27:01 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:10:28.630 17:27:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:28.630 17:27:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:28.630 17:27:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:28.630 ************************************ 00:10:28.630 START TEST raid_state_function_test_sb 00:10:28.630 ************************************ 00:10:28.630 17:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:10:28.630 17:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:28.630 17:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:28.630 17:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:28.630 17:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:28.630 17:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:28.630 17:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:28.630 17:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:28.630 17:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:28.630 17:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:28.630 17:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:28.630 17:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:28.630 17:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:28.630 17:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:28.630 
17:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:28.630 17:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:28.630 17:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:28.631 17:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:28.631 17:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:28.631 17:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:28.631 17:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:28.631 17:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:28.631 17:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:28.631 17:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:28.631 17:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:28.631 17:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:28.631 17:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:28.631 17:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:28.631 17:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:28.631 17:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:28.631 Process raid pid: 70067 00:10:28.631 17:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70067 00:10:28.631 17:27:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:28.631 17:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70067' 00:10:28.631 17:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70067 00:10:28.631 17:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 70067 ']' 00:10:28.631 17:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:28.631 17:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:28.631 17:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:28.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:28.631 17:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:28.631 17:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.631 [2024-12-07 17:27:01.926022] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:10:28.631 [2024-12-07 17:27:01.926234] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:28.889 [2024-12-07 17:27:02.100233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.889 [2024-12-07 17:27:02.236349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.148 [2024-12-07 17:27:02.473918] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:29.148 [2024-12-07 17:27:02.474070] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:29.406 17:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:29.406 17:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:29.406 17:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:29.406 17:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.406 17:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.406 [2024-12-07 17:27:02.751763] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:29.406 [2024-12-07 17:27:02.751838] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:29.407 [2024-12-07 17:27:02.751849] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:29.407 [2024-12-07 17:27:02.751859] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:29.407 [2024-12-07 17:27:02.751865] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:10:29.407 [2024-12-07 17:27:02.751875] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:29.407 [2024-12-07 17:27:02.751882] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:29.407 [2024-12-07 17:27:02.751890] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:29.407 17:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.407 17:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:29.407 17:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:29.407 17:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:29.407 17:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:29.407 17:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:29.407 17:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:29.407 17:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.407 17:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.407 17:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.407 17:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.407 17:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.407 17:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.407 17:27:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.407 17:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.407 17:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.665 17:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.665 "name": "Existed_Raid", 00:10:29.665 "uuid": "9b0ac10f-aec9-4b59-9c43-8ea2521e28e7", 00:10:29.665 "strip_size_kb": 64, 00:10:29.665 "state": "configuring", 00:10:29.665 "raid_level": "raid0", 00:10:29.665 "superblock": true, 00:10:29.665 "num_base_bdevs": 4, 00:10:29.665 "num_base_bdevs_discovered": 0, 00:10:29.665 "num_base_bdevs_operational": 4, 00:10:29.665 "base_bdevs_list": [ 00:10:29.665 { 00:10:29.665 "name": "BaseBdev1", 00:10:29.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.665 "is_configured": false, 00:10:29.665 "data_offset": 0, 00:10:29.665 "data_size": 0 00:10:29.665 }, 00:10:29.665 { 00:10:29.665 "name": "BaseBdev2", 00:10:29.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.665 "is_configured": false, 00:10:29.665 "data_offset": 0, 00:10:29.665 "data_size": 0 00:10:29.665 }, 00:10:29.665 { 00:10:29.665 "name": "BaseBdev3", 00:10:29.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.665 "is_configured": false, 00:10:29.665 "data_offset": 0, 00:10:29.665 "data_size": 0 00:10:29.665 }, 00:10:29.665 { 00:10:29.665 "name": "BaseBdev4", 00:10:29.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.665 "is_configured": false, 00:10:29.665 "data_offset": 0, 00:10:29.665 "data_size": 0 00:10:29.665 } 00:10:29.665 ] 00:10:29.665 }' 00:10:29.665 17:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.665 17:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.923 17:27:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:29.923 17:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.923 17:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.923 [2024-12-07 17:27:03.171098] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:29.923 [2024-12-07 17:27:03.171240] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:29.923 17:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.923 17:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:29.923 17:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.923 17:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.923 [2024-12-07 17:27:03.183043] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:29.923 [2024-12-07 17:27:03.183144] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:29.923 [2024-12-07 17:27:03.183171] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:29.923 [2024-12-07 17:27:03.183195] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:29.923 [2024-12-07 17:27:03.183212] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:29.923 [2024-12-07 17:27:03.183234] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:29.923 [2024-12-07 17:27:03.183251] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:10:29.923 [2024-12-07 17:27:03.183272] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:29.923 17:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.923 17:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:29.923 17:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.923 17:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.923 [2024-12-07 17:27:03.235976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:29.924 BaseBdev1 00:10:29.924 17:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.924 17:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:29.924 17:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:29.924 17:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:29.924 17:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:29.924 17:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:29.924 17:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:29.924 17:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:29.924 17:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.924 17:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.924 17:27:03 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.924 17:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:29.924 17:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.924 17:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.924 [ 00:10:29.924 { 00:10:29.924 "name": "BaseBdev1", 00:10:29.924 "aliases": [ 00:10:29.924 "25a9a117-1b9e-4b9c-b5ee-934a0ab01c75" 00:10:29.924 ], 00:10:29.924 "product_name": "Malloc disk", 00:10:29.924 "block_size": 512, 00:10:29.924 "num_blocks": 65536, 00:10:29.924 "uuid": "25a9a117-1b9e-4b9c-b5ee-934a0ab01c75", 00:10:29.924 "assigned_rate_limits": { 00:10:29.924 "rw_ios_per_sec": 0, 00:10:29.924 "rw_mbytes_per_sec": 0, 00:10:29.924 "r_mbytes_per_sec": 0, 00:10:29.924 "w_mbytes_per_sec": 0 00:10:29.924 }, 00:10:29.924 "claimed": true, 00:10:29.924 "claim_type": "exclusive_write", 00:10:29.924 "zoned": false, 00:10:29.924 "supported_io_types": { 00:10:29.924 "read": true, 00:10:29.924 "write": true, 00:10:29.924 "unmap": true, 00:10:29.924 "flush": true, 00:10:29.924 "reset": true, 00:10:29.924 "nvme_admin": false, 00:10:29.924 "nvme_io": false, 00:10:29.924 "nvme_io_md": false, 00:10:29.924 "write_zeroes": true, 00:10:29.924 "zcopy": true, 00:10:29.924 "get_zone_info": false, 00:10:29.924 "zone_management": false, 00:10:29.924 "zone_append": false, 00:10:29.924 "compare": false, 00:10:29.924 "compare_and_write": false, 00:10:29.924 "abort": true, 00:10:29.924 "seek_hole": false, 00:10:29.924 "seek_data": false, 00:10:29.924 "copy": true, 00:10:29.924 "nvme_iov_md": false 00:10:29.924 }, 00:10:29.924 "memory_domains": [ 00:10:29.924 { 00:10:29.924 "dma_device_id": "system", 00:10:29.924 "dma_device_type": 1 00:10:29.924 }, 00:10:29.924 { 00:10:29.924 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.924 "dma_device_type": 2 00:10:29.924 } 
00:10:29.924 ], 00:10:29.924 "driver_specific": {} 00:10:29.924 } 00:10:29.924 ] 00:10:29.924 17:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.924 17:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:29.924 17:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:29.924 17:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:29.924 17:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:29.924 17:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:29.924 17:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:29.924 17:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:29.924 17:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.924 17:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.924 17:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.924 17:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.924 17:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.924 17:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.924 17:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.924 17:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.924 17:27:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.183 17:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.183 "name": "Existed_Raid", 00:10:30.183 "uuid": "90f4ecae-41d9-44fd-bc27-409b43e47de0", 00:10:30.183 "strip_size_kb": 64, 00:10:30.183 "state": "configuring", 00:10:30.183 "raid_level": "raid0", 00:10:30.183 "superblock": true, 00:10:30.183 "num_base_bdevs": 4, 00:10:30.183 "num_base_bdevs_discovered": 1, 00:10:30.183 "num_base_bdevs_operational": 4, 00:10:30.183 "base_bdevs_list": [ 00:10:30.183 { 00:10:30.183 "name": "BaseBdev1", 00:10:30.183 "uuid": "25a9a117-1b9e-4b9c-b5ee-934a0ab01c75", 00:10:30.183 "is_configured": true, 00:10:30.183 "data_offset": 2048, 00:10:30.183 "data_size": 63488 00:10:30.183 }, 00:10:30.183 { 00:10:30.183 "name": "BaseBdev2", 00:10:30.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.183 "is_configured": false, 00:10:30.183 "data_offset": 0, 00:10:30.183 "data_size": 0 00:10:30.183 }, 00:10:30.183 { 00:10:30.183 "name": "BaseBdev3", 00:10:30.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.183 "is_configured": false, 00:10:30.183 "data_offset": 0, 00:10:30.183 "data_size": 0 00:10:30.183 }, 00:10:30.183 { 00:10:30.183 "name": "BaseBdev4", 00:10:30.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.183 "is_configured": false, 00:10:30.183 "data_offset": 0, 00:10:30.183 "data_size": 0 00:10:30.183 } 00:10:30.183 ] 00:10:30.183 }' 00:10:30.183 17:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.183 17:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.443 17:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:30.443 17:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.443 17:27:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.443 [2024-12-07 17:27:03.719222] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:30.443 [2024-12-07 17:27:03.719381] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:30.443 17:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.443 17:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:30.443 17:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.444 17:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.444 [2024-12-07 17:27:03.731228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:30.444 [2024-12-07 17:27:03.733457] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:30.444 [2024-12-07 17:27:03.733541] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:30.444 [2024-12-07 17:27:03.733570] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:30.444 [2024-12-07 17:27:03.733595] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:30.444 [2024-12-07 17:27:03.733613] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:30.444 [2024-12-07 17:27:03.733634] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:30.444 17:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.444 17:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:10:30.444 17:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:30.444 17:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:30.444 17:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.444 17:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:30.444 17:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:30.444 17:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:30.444 17:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:30.444 17:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.444 17:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.444 17:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.444 17:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.444 17:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.444 17:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.444 17:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.444 17:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.444 17:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.444 17:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:10:30.444 "name": "Existed_Raid", 00:10:30.444 "uuid": "db40b798-08f1-43cd-8e0b-ec264d7d6ea0", 00:10:30.444 "strip_size_kb": 64, 00:10:30.444 "state": "configuring", 00:10:30.444 "raid_level": "raid0", 00:10:30.444 "superblock": true, 00:10:30.444 "num_base_bdevs": 4, 00:10:30.444 "num_base_bdevs_discovered": 1, 00:10:30.444 "num_base_bdevs_operational": 4, 00:10:30.444 "base_bdevs_list": [ 00:10:30.444 { 00:10:30.444 "name": "BaseBdev1", 00:10:30.444 "uuid": "25a9a117-1b9e-4b9c-b5ee-934a0ab01c75", 00:10:30.444 "is_configured": true, 00:10:30.444 "data_offset": 2048, 00:10:30.444 "data_size": 63488 00:10:30.444 }, 00:10:30.444 { 00:10:30.444 "name": "BaseBdev2", 00:10:30.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.444 "is_configured": false, 00:10:30.444 "data_offset": 0, 00:10:30.444 "data_size": 0 00:10:30.444 }, 00:10:30.444 { 00:10:30.444 "name": "BaseBdev3", 00:10:30.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.444 "is_configured": false, 00:10:30.444 "data_offset": 0, 00:10:30.444 "data_size": 0 00:10:30.444 }, 00:10:30.444 { 00:10:30.444 "name": "BaseBdev4", 00:10:30.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.444 "is_configured": false, 00:10:30.444 "data_offset": 0, 00:10:30.444 "data_size": 0 00:10:30.444 } 00:10:30.444 ] 00:10:30.444 }' 00:10:30.444 17:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.444 17:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.013 17:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:31.013 17:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.013 17:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.013 [2024-12-07 17:27:04.194879] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:10:31.013 BaseBdev2 00:10:31.013 17:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.013 17:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:31.013 17:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:31.013 17:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:31.013 17:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:31.013 17:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:31.013 17:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:31.013 17:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:31.013 17:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.013 17:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.013 17:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.013 17:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:31.013 17:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.013 17:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.013 [ 00:10:31.013 { 00:10:31.013 "name": "BaseBdev2", 00:10:31.013 "aliases": [ 00:10:31.013 "df7035d9-8a32-4dc8-a670-c34f276d67e4" 00:10:31.013 ], 00:10:31.013 "product_name": "Malloc disk", 00:10:31.013 "block_size": 512, 00:10:31.013 "num_blocks": 65536, 00:10:31.013 "uuid": "df7035d9-8a32-4dc8-a670-c34f276d67e4", 
00:10:31.013 "assigned_rate_limits": { 00:10:31.013 "rw_ios_per_sec": 0, 00:10:31.013 "rw_mbytes_per_sec": 0, 00:10:31.013 "r_mbytes_per_sec": 0, 00:10:31.013 "w_mbytes_per_sec": 0 00:10:31.013 }, 00:10:31.013 "claimed": true, 00:10:31.013 "claim_type": "exclusive_write", 00:10:31.013 "zoned": false, 00:10:31.013 "supported_io_types": { 00:10:31.013 "read": true, 00:10:31.013 "write": true, 00:10:31.013 "unmap": true, 00:10:31.013 "flush": true, 00:10:31.013 "reset": true, 00:10:31.013 "nvme_admin": false, 00:10:31.013 "nvme_io": false, 00:10:31.013 "nvme_io_md": false, 00:10:31.013 "write_zeroes": true, 00:10:31.013 "zcopy": true, 00:10:31.013 "get_zone_info": false, 00:10:31.013 "zone_management": false, 00:10:31.013 "zone_append": false, 00:10:31.014 "compare": false, 00:10:31.014 "compare_and_write": false, 00:10:31.014 "abort": true, 00:10:31.014 "seek_hole": false, 00:10:31.014 "seek_data": false, 00:10:31.014 "copy": true, 00:10:31.014 "nvme_iov_md": false 00:10:31.014 }, 00:10:31.014 "memory_domains": [ 00:10:31.014 { 00:10:31.014 "dma_device_id": "system", 00:10:31.014 "dma_device_type": 1 00:10:31.014 }, 00:10:31.014 { 00:10:31.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.014 "dma_device_type": 2 00:10:31.014 } 00:10:31.014 ], 00:10:31.014 "driver_specific": {} 00:10:31.014 } 00:10:31.014 ] 00:10:31.014 17:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.014 17:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:31.014 17:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:31.014 17:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:31.014 17:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:31.014 17:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:10:31.014 17:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.014 17:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:31.014 17:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.014 17:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:31.014 17:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.014 17:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.014 17:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.014 17:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.014 17:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.014 17:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.014 17:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.014 17:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.014 17:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.014 17:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.014 "name": "Existed_Raid", 00:10:31.014 "uuid": "db40b798-08f1-43cd-8e0b-ec264d7d6ea0", 00:10:31.014 "strip_size_kb": 64, 00:10:31.014 "state": "configuring", 00:10:31.014 "raid_level": "raid0", 00:10:31.014 "superblock": true, 00:10:31.014 "num_base_bdevs": 4, 00:10:31.014 "num_base_bdevs_discovered": 2, 00:10:31.014 
"num_base_bdevs_operational": 4, 00:10:31.014 "base_bdevs_list": [ 00:10:31.014 { 00:10:31.014 "name": "BaseBdev1", 00:10:31.014 "uuid": "25a9a117-1b9e-4b9c-b5ee-934a0ab01c75", 00:10:31.014 "is_configured": true, 00:10:31.014 "data_offset": 2048, 00:10:31.014 "data_size": 63488 00:10:31.014 }, 00:10:31.014 { 00:10:31.014 "name": "BaseBdev2", 00:10:31.014 "uuid": "df7035d9-8a32-4dc8-a670-c34f276d67e4", 00:10:31.014 "is_configured": true, 00:10:31.014 "data_offset": 2048, 00:10:31.014 "data_size": 63488 00:10:31.014 }, 00:10:31.014 { 00:10:31.014 "name": "BaseBdev3", 00:10:31.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.014 "is_configured": false, 00:10:31.014 "data_offset": 0, 00:10:31.014 "data_size": 0 00:10:31.014 }, 00:10:31.014 { 00:10:31.014 "name": "BaseBdev4", 00:10:31.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.014 "is_configured": false, 00:10:31.014 "data_offset": 0, 00:10:31.014 "data_size": 0 00:10:31.014 } 00:10:31.014 ] 00:10:31.014 }' 00:10:31.014 17:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.014 17:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.581 17:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:31.581 17:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.581 17:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.581 [2024-12-07 17:27:04.732811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:31.581 BaseBdev3 00:10:31.581 17:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.581 17:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:31.581 17:27:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:31.582 17:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:31.582 17:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:31.582 17:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:31.582 17:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:31.582 17:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:31.582 17:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.582 17:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.582 17:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.582 17:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:31.582 17:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.582 17:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.582 [ 00:10:31.582 { 00:10:31.582 "name": "BaseBdev3", 00:10:31.582 "aliases": [ 00:10:31.582 "82581662-a2a7-4893-bdc4-9ee722ae4c90" 00:10:31.582 ], 00:10:31.582 "product_name": "Malloc disk", 00:10:31.582 "block_size": 512, 00:10:31.582 "num_blocks": 65536, 00:10:31.582 "uuid": "82581662-a2a7-4893-bdc4-9ee722ae4c90", 00:10:31.582 "assigned_rate_limits": { 00:10:31.582 "rw_ios_per_sec": 0, 00:10:31.582 "rw_mbytes_per_sec": 0, 00:10:31.582 "r_mbytes_per_sec": 0, 00:10:31.582 "w_mbytes_per_sec": 0 00:10:31.582 }, 00:10:31.582 "claimed": true, 00:10:31.582 "claim_type": "exclusive_write", 00:10:31.582 "zoned": false, 00:10:31.582 "supported_io_types": { 
00:10:31.582 "read": true, 00:10:31.582 "write": true, 00:10:31.582 "unmap": true, 00:10:31.582 "flush": true, 00:10:31.582 "reset": true, 00:10:31.582 "nvme_admin": false, 00:10:31.582 "nvme_io": false, 00:10:31.582 "nvme_io_md": false, 00:10:31.582 "write_zeroes": true, 00:10:31.582 "zcopy": true, 00:10:31.582 "get_zone_info": false, 00:10:31.582 "zone_management": false, 00:10:31.582 "zone_append": false, 00:10:31.582 "compare": false, 00:10:31.582 "compare_and_write": false, 00:10:31.582 "abort": true, 00:10:31.582 "seek_hole": false, 00:10:31.582 "seek_data": false, 00:10:31.582 "copy": true, 00:10:31.582 "nvme_iov_md": false 00:10:31.582 }, 00:10:31.582 "memory_domains": [ 00:10:31.582 { 00:10:31.582 "dma_device_id": "system", 00:10:31.582 "dma_device_type": 1 00:10:31.582 }, 00:10:31.582 { 00:10:31.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.582 "dma_device_type": 2 00:10:31.582 } 00:10:31.582 ], 00:10:31.582 "driver_specific": {} 00:10:31.582 } 00:10:31.582 ] 00:10:31.582 17:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.582 17:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:31.582 17:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:31.582 17:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:31.582 17:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:31.582 17:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.582 17:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.582 17:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:31.582 17:27:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.582 17:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:31.582 17:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.582 17:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.582 17:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.582 17:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.582 17:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.582 17:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.582 17:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.582 17:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.582 17:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.582 17:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.582 "name": "Existed_Raid", 00:10:31.582 "uuid": "db40b798-08f1-43cd-8e0b-ec264d7d6ea0", 00:10:31.582 "strip_size_kb": 64, 00:10:31.582 "state": "configuring", 00:10:31.582 "raid_level": "raid0", 00:10:31.582 "superblock": true, 00:10:31.582 "num_base_bdevs": 4, 00:10:31.582 "num_base_bdevs_discovered": 3, 00:10:31.582 "num_base_bdevs_operational": 4, 00:10:31.582 "base_bdevs_list": [ 00:10:31.582 { 00:10:31.582 "name": "BaseBdev1", 00:10:31.582 "uuid": "25a9a117-1b9e-4b9c-b5ee-934a0ab01c75", 00:10:31.582 "is_configured": true, 00:10:31.582 "data_offset": 2048, 00:10:31.582 "data_size": 63488 00:10:31.582 }, 00:10:31.582 { 00:10:31.582 "name": "BaseBdev2", 00:10:31.582 
"uuid": "df7035d9-8a32-4dc8-a670-c34f276d67e4", 00:10:31.582 "is_configured": true, 00:10:31.582 "data_offset": 2048, 00:10:31.582 "data_size": 63488 00:10:31.582 }, 00:10:31.582 { 00:10:31.582 "name": "BaseBdev3", 00:10:31.582 "uuid": "82581662-a2a7-4893-bdc4-9ee722ae4c90", 00:10:31.582 "is_configured": true, 00:10:31.582 "data_offset": 2048, 00:10:31.582 "data_size": 63488 00:10:31.582 }, 00:10:31.582 { 00:10:31.582 "name": "BaseBdev4", 00:10:31.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.582 "is_configured": false, 00:10:31.582 "data_offset": 0, 00:10:31.582 "data_size": 0 00:10:31.582 } 00:10:31.582 ] 00:10:31.582 }' 00:10:31.582 17:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.582 17:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.841 17:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:31.841 17:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.841 17:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.099 [2024-12-07 17:27:05.246636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:32.099 [2024-12-07 17:27:05.246924] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:32.099 [2024-12-07 17:27:05.246960] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:32.099 [2024-12-07 17:27:05.247266] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:32.099 BaseBdev4 00:10:32.099 [2024-12-07 17:27:05.247419] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:32.099 [2024-12-07 17:27:05.247438] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:10:32.099 [2024-12-07 17:27:05.247581] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:32.099 17:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.099 17:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:32.099 17:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:32.099 17:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:32.099 17:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:32.099 17:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:32.099 17:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:32.099 17:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:32.099 17:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.099 17:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.099 17:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.099 17:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:32.099 17:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.099 17:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.099 [ 00:10:32.099 { 00:10:32.099 "name": "BaseBdev4", 00:10:32.099 "aliases": [ 00:10:32.099 "43c30d40-7795-4442-bcb8-0a535a55dfa8" 00:10:32.099 ], 00:10:32.099 "product_name": "Malloc disk", 00:10:32.099 "block_size": 512, 00:10:32.099 
"num_blocks": 65536, 00:10:32.099 "uuid": "43c30d40-7795-4442-bcb8-0a535a55dfa8", 00:10:32.099 "assigned_rate_limits": { 00:10:32.099 "rw_ios_per_sec": 0, 00:10:32.099 "rw_mbytes_per_sec": 0, 00:10:32.099 "r_mbytes_per_sec": 0, 00:10:32.099 "w_mbytes_per_sec": 0 00:10:32.099 }, 00:10:32.099 "claimed": true, 00:10:32.099 "claim_type": "exclusive_write", 00:10:32.099 "zoned": false, 00:10:32.099 "supported_io_types": { 00:10:32.099 "read": true, 00:10:32.099 "write": true, 00:10:32.099 "unmap": true, 00:10:32.099 "flush": true, 00:10:32.099 "reset": true, 00:10:32.099 "nvme_admin": false, 00:10:32.099 "nvme_io": false, 00:10:32.099 "nvme_io_md": false, 00:10:32.099 "write_zeroes": true, 00:10:32.099 "zcopy": true, 00:10:32.099 "get_zone_info": false, 00:10:32.099 "zone_management": false, 00:10:32.099 "zone_append": false, 00:10:32.099 "compare": false, 00:10:32.099 "compare_and_write": false, 00:10:32.099 "abort": true, 00:10:32.099 "seek_hole": false, 00:10:32.099 "seek_data": false, 00:10:32.099 "copy": true, 00:10:32.099 "nvme_iov_md": false 00:10:32.099 }, 00:10:32.099 "memory_domains": [ 00:10:32.099 { 00:10:32.099 "dma_device_id": "system", 00:10:32.099 "dma_device_type": 1 00:10:32.099 }, 00:10:32.099 { 00:10:32.099 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.099 "dma_device_type": 2 00:10:32.099 } 00:10:32.099 ], 00:10:32.099 "driver_specific": {} 00:10:32.099 } 00:10:32.099 ] 00:10:32.099 17:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.099 17:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:32.099 17:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:32.099 17:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:32.099 17:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:10:32.099 17:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.099 17:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:32.099 17:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:32.099 17:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.099 17:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:32.099 17:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.099 17:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.099 17:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.099 17:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.099 17:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.099 17:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.099 17:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.099 17:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.099 17:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.099 17:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.099 "name": "Existed_Raid", 00:10:32.099 "uuid": "db40b798-08f1-43cd-8e0b-ec264d7d6ea0", 00:10:32.099 "strip_size_kb": 64, 00:10:32.099 "state": "online", 00:10:32.099 "raid_level": "raid0", 00:10:32.099 "superblock": true, 00:10:32.099 "num_base_bdevs": 4, 
00:10:32.099 "num_base_bdevs_discovered": 4, 00:10:32.099 "num_base_bdevs_operational": 4, 00:10:32.099 "base_bdevs_list": [ 00:10:32.099 { 00:10:32.099 "name": "BaseBdev1", 00:10:32.099 "uuid": "25a9a117-1b9e-4b9c-b5ee-934a0ab01c75", 00:10:32.099 "is_configured": true, 00:10:32.099 "data_offset": 2048, 00:10:32.099 "data_size": 63488 00:10:32.099 }, 00:10:32.099 { 00:10:32.099 "name": "BaseBdev2", 00:10:32.099 "uuid": "df7035d9-8a32-4dc8-a670-c34f276d67e4", 00:10:32.099 "is_configured": true, 00:10:32.099 "data_offset": 2048, 00:10:32.099 "data_size": 63488 00:10:32.099 }, 00:10:32.099 { 00:10:32.099 "name": "BaseBdev3", 00:10:32.099 "uuid": "82581662-a2a7-4893-bdc4-9ee722ae4c90", 00:10:32.099 "is_configured": true, 00:10:32.099 "data_offset": 2048, 00:10:32.099 "data_size": 63488 00:10:32.099 }, 00:10:32.099 { 00:10:32.099 "name": "BaseBdev4", 00:10:32.099 "uuid": "43c30d40-7795-4442-bcb8-0a535a55dfa8", 00:10:32.099 "is_configured": true, 00:10:32.099 "data_offset": 2048, 00:10:32.099 "data_size": 63488 00:10:32.099 } 00:10:32.099 ] 00:10:32.099 }' 00:10:32.099 17:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.099 17:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.357 17:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:32.357 17:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:32.357 17:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:32.357 17:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:32.357 17:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:32.357 17:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:32.357 
17:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:32.357 17:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:32.615 17:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.615 17:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.615 [2024-12-07 17:27:05.742355] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:32.615 17:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.615 17:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:32.615 "name": "Existed_Raid", 00:10:32.615 "aliases": [ 00:10:32.615 "db40b798-08f1-43cd-8e0b-ec264d7d6ea0" 00:10:32.615 ], 00:10:32.615 "product_name": "Raid Volume", 00:10:32.615 "block_size": 512, 00:10:32.615 "num_blocks": 253952, 00:10:32.615 "uuid": "db40b798-08f1-43cd-8e0b-ec264d7d6ea0", 00:10:32.615 "assigned_rate_limits": { 00:10:32.615 "rw_ios_per_sec": 0, 00:10:32.615 "rw_mbytes_per_sec": 0, 00:10:32.615 "r_mbytes_per_sec": 0, 00:10:32.615 "w_mbytes_per_sec": 0 00:10:32.615 }, 00:10:32.615 "claimed": false, 00:10:32.615 "zoned": false, 00:10:32.615 "supported_io_types": { 00:10:32.615 "read": true, 00:10:32.615 "write": true, 00:10:32.615 "unmap": true, 00:10:32.615 "flush": true, 00:10:32.615 "reset": true, 00:10:32.615 "nvme_admin": false, 00:10:32.615 "nvme_io": false, 00:10:32.615 "nvme_io_md": false, 00:10:32.615 "write_zeroes": true, 00:10:32.615 "zcopy": false, 00:10:32.615 "get_zone_info": false, 00:10:32.615 "zone_management": false, 00:10:32.615 "zone_append": false, 00:10:32.615 "compare": false, 00:10:32.615 "compare_and_write": false, 00:10:32.615 "abort": false, 00:10:32.615 "seek_hole": false, 00:10:32.615 "seek_data": false, 00:10:32.615 "copy": false, 00:10:32.615 
"nvme_iov_md": false 00:10:32.615 }, 00:10:32.615 "memory_domains": [ 00:10:32.615 { 00:10:32.615 "dma_device_id": "system", 00:10:32.615 "dma_device_type": 1 00:10:32.615 }, 00:10:32.615 { 00:10:32.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.615 "dma_device_type": 2 00:10:32.615 }, 00:10:32.615 { 00:10:32.615 "dma_device_id": "system", 00:10:32.615 "dma_device_type": 1 00:10:32.615 }, 00:10:32.615 { 00:10:32.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.615 "dma_device_type": 2 00:10:32.615 }, 00:10:32.615 { 00:10:32.615 "dma_device_id": "system", 00:10:32.615 "dma_device_type": 1 00:10:32.615 }, 00:10:32.615 { 00:10:32.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.615 "dma_device_type": 2 00:10:32.615 }, 00:10:32.615 { 00:10:32.615 "dma_device_id": "system", 00:10:32.615 "dma_device_type": 1 00:10:32.615 }, 00:10:32.615 { 00:10:32.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.616 "dma_device_type": 2 00:10:32.616 } 00:10:32.616 ], 00:10:32.616 "driver_specific": { 00:10:32.616 "raid": { 00:10:32.616 "uuid": "db40b798-08f1-43cd-8e0b-ec264d7d6ea0", 00:10:32.616 "strip_size_kb": 64, 00:10:32.616 "state": "online", 00:10:32.616 "raid_level": "raid0", 00:10:32.616 "superblock": true, 00:10:32.616 "num_base_bdevs": 4, 00:10:32.616 "num_base_bdevs_discovered": 4, 00:10:32.616 "num_base_bdevs_operational": 4, 00:10:32.616 "base_bdevs_list": [ 00:10:32.616 { 00:10:32.616 "name": "BaseBdev1", 00:10:32.616 "uuid": "25a9a117-1b9e-4b9c-b5ee-934a0ab01c75", 00:10:32.616 "is_configured": true, 00:10:32.616 "data_offset": 2048, 00:10:32.616 "data_size": 63488 00:10:32.616 }, 00:10:32.616 { 00:10:32.616 "name": "BaseBdev2", 00:10:32.616 "uuid": "df7035d9-8a32-4dc8-a670-c34f276d67e4", 00:10:32.616 "is_configured": true, 00:10:32.616 "data_offset": 2048, 00:10:32.616 "data_size": 63488 00:10:32.616 }, 00:10:32.616 { 00:10:32.616 "name": "BaseBdev3", 00:10:32.616 "uuid": "82581662-a2a7-4893-bdc4-9ee722ae4c90", 00:10:32.616 "is_configured": true, 
00:10:32.616 "data_offset": 2048, 00:10:32.616 "data_size": 63488 00:10:32.616 }, 00:10:32.616 { 00:10:32.616 "name": "BaseBdev4", 00:10:32.616 "uuid": "43c30d40-7795-4442-bcb8-0a535a55dfa8", 00:10:32.616 "is_configured": true, 00:10:32.616 "data_offset": 2048, 00:10:32.616 "data_size": 63488 00:10:32.616 } 00:10:32.616 ] 00:10:32.616 } 00:10:32.616 } 00:10:32.616 }' 00:10:32.616 17:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:32.616 17:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:32.616 BaseBdev2 00:10:32.616 BaseBdev3 00:10:32.616 BaseBdev4' 00:10:32.616 17:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:32.616 17:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:32.616 17:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:32.616 17:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:32.616 17:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:32.616 17:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.616 17:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.616 17:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.616 17:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:32.616 17:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:32.616 17:27:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:32.616 17:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:32.616 17:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.616 17:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.616 17:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:32.616 17:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.616 17:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:32.616 17:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:32.616 17:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:32.616 17:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:32.616 17:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.616 17:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.616 17:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:32.616 17:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.616 17:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:32.616 17:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:32.616 17:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:32.616 17:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:32.616 17:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.616 17:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.874 17:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:32.874 17:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.874 17:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:32.874 17:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:32.874 17:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:32.874 17:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.874 17:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.874 [2024-12-07 17:27:06.049481] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:32.874 [2024-12-07 17:27:06.049527] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:32.874 [2024-12-07 17:27:06.049583] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:32.874 17:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.874 17:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:32.874 17:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:32.874 17:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:32.874 17:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:32.874 17:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:32.874 17:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:32.874 17:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.874 17:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:32.874 17:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:32.874 17:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.874 17:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:32.874 17:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.874 17:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.874 17:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.875 17:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.875 17:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.875 17:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.875 17:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.875 17:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.875 17:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:32.875 17:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.875 "name": "Existed_Raid", 00:10:32.875 "uuid": "db40b798-08f1-43cd-8e0b-ec264d7d6ea0", 00:10:32.875 "strip_size_kb": 64, 00:10:32.875 "state": "offline", 00:10:32.875 "raid_level": "raid0", 00:10:32.875 "superblock": true, 00:10:32.875 "num_base_bdevs": 4, 00:10:32.875 "num_base_bdevs_discovered": 3, 00:10:32.875 "num_base_bdevs_operational": 3, 00:10:32.875 "base_bdevs_list": [ 00:10:32.875 { 00:10:32.875 "name": null, 00:10:32.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.875 "is_configured": false, 00:10:32.875 "data_offset": 0, 00:10:32.875 "data_size": 63488 00:10:32.875 }, 00:10:32.875 { 00:10:32.875 "name": "BaseBdev2", 00:10:32.875 "uuid": "df7035d9-8a32-4dc8-a670-c34f276d67e4", 00:10:32.875 "is_configured": true, 00:10:32.875 "data_offset": 2048, 00:10:32.875 "data_size": 63488 00:10:32.875 }, 00:10:32.875 { 00:10:32.875 "name": "BaseBdev3", 00:10:32.875 "uuid": "82581662-a2a7-4893-bdc4-9ee722ae4c90", 00:10:32.875 "is_configured": true, 00:10:32.875 "data_offset": 2048, 00:10:32.875 "data_size": 63488 00:10:32.875 }, 00:10:32.875 { 00:10:32.875 "name": "BaseBdev4", 00:10:32.875 "uuid": "43c30d40-7795-4442-bcb8-0a535a55dfa8", 00:10:32.875 "is_configured": true, 00:10:32.875 "data_offset": 2048, 00:10:32.875 "data_size": 63488 00:10:32.875 } 00:10:32.875 ] 00:10:32.875 }' 00:10:32.875 17:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.875 17:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.440 17:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:33.440 17:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:33.440 17:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.440 
17:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.440 17:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.440 17:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:33.440 17:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.440 17:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:33.440 17:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:33.440 17:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:33.440 17:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.440 17:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.440 [2024-12-07 17:27:06.616622] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:33.440 17:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.440 17:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:33.440 17:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:33.440 17:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.440 17:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:33.440 17:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.440 17:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.440 17:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:33.440 17:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:33.440 17:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:33.440 17:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:33.440 17:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.440 17:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.440 [2024-12-07 17:27:06.773972] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:33.696 17:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.696 17:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:33.696 17:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:33.696 17:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.696 17:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:33.696 17:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.696 17:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.696 17:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.696 17:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:33.696 17:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:33.696 17:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:33.696 17:27:06 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.696 17:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.696 [2024-12-07 17:27:06.932896] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:33.696 [2024-12-07 17:27:06.933063] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:33.696 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.696 17:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:33.696 17:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:33.696 17:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.696 17:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:33.696 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.696 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.696 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.954 17:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:33.954 17:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:33.954 17:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:33.954 17:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:33.954 17:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:33.954 17:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:33.954 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.954 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.954 BaseBdev2 00:10:33.954 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.954 17:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:33.954 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:33.954 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:33.954 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:33.954 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:33.954 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:33.954 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:33.954 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.954 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.954 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.954 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:33.954 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.954 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.954 [ 00:10:33.954 { 00:10:33.954 "name": "BaseBdev2", 00:10:33.954 "aliases": [ 00:10:33.954 
"079b017b-0a4e-48e9-9a9f-b22b89fb62d7" 00:10:33.954 ], 00:10:33.954 "product_name": "Malloc disk", 00:10:33.954 "block_size": 512, 00:10:33.954 "num_blocks": 65536, 00:10:33.954 "uuid": "079b017b-0a4e-48e9-9a9f-b22b89fb62d7", 00:10:33.954 "assigned_rate_limits": { 00:10:33.954 "rw_ios_per_sec": 0, 00:10:33.954 "rw_mbytes_per_sec": 0, 00:10:33.954 "r_mbytes_per_sec": 0, 00:10:33.954 "w_mbytes_per_sec": 0 00:10:33.954 }, 00:10:33.954 "claimed": false, 00:10:33.954 "zoned": false, 00:10:33.954 "supported_io_types": { 00:10:33.954 "read": true, 00:10:33.954 "write": true, 00:10:33.954 "unmap": true, 00:10:33.954 "flush": true, 00:10:33.954 "reset": true, 00:10:33.954 "nvme_admin": false, 00:10:33.954 "nvme_io": false, 00:10:33.954 "nvme_io_md": false, 00:10:33.954 "write_zeroes": true, 00:10:33.954 "zcopy": true, 00:10:33.954 "get_zone_info": false, 00:10:33.954 "zone_management": false, 00:10:33.954 "zone_append": false, 00:10:33.954 "compare": false, 00:10:33.954 "compare_and_write": false, 00:10:33.954 "abort": true, 00:10:33.954 "seek_hole": false, 00:10:33.954 "seek_data": false, 00:10:33.954 "copy": true, 00:10:33.954 "nvme_iov_md": false 00:10:33.954 }, 00:10:33.954 "memory_domains": [ 00:10:33.954 { 00:10:33.954 "dma_device_id": "system", 00:10:33.954 "dma_device_type": 1 00:10:33.954 }, 00:10:33.954 { 00:10:33.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.954 "dma_device_type": 2 00:10:33.954 } 00:10:33.954 ], 00:10:33.954 "driver_specific": {} 00:10:33.954 } 00:10:33.954 ] 00:10:33.954 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.954 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:33.954 17:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:33.954 17:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:33.954 17:27:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:33.954 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.954 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.954 BaseBdev3 00:10:33.954 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.954 17:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:33.954 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:33.954 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:33.954 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:33.954 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:33.954 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:33.954 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:33.954 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.954 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.954 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.954 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:33.954 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.954 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.954 [ 00:10:33.954 { 
00:10:33.954 "name": "BaseBdev3", 00:10:33.954 "aliases": [ 00:10:33.954 "8113a0ad-624f-4f95-a811-0f2527fbb5c0" 00:10:33.954 ], 00:10:33.954 "product_name": "Malloc disk", 00:10:33.954 "block_size": 512, 00:10:33.954 "num_blocks": 65536, 00:10:33.954 "uuid": "8113a0ad-624f-4f95-a811-0f2527fbb5c0", 00:10:33.954 "assigned_rate_limits": { 00:10:33.954 "rw_ios_per_sec": 0, 00:10:33.954 "rw_mbytes_per_sec": 0, 00:10:33.954 "r_mbytes_per_sec": 0, 00:10:33.954 "w_mbytes_per_sec": 0 00:10:33.954 }, 00:10:33.954 "claimed": false, 00:10:33.954 "zoned": false, 00:10:33.954 "supported_io_types": { 00:10:33.954 "read": true, 00:10:33.954 "write": true, 00:10:33.954 "unmap": true, 00:10:33.954 "flush": true, 00:10:33.954 "reset": true, 00:10:33.954 "nvme_admin": false, 00:10:33.954 "nvme_io": false, 00:10:33.954 "nvme_io_md": false, 00:10:33.954 "write_zeroes": true, 00:10:33.954 "zcopy": true, 00:10:33.954 "get_zone_info": false, 00:10:33.954 "zone_management": false, 00:10:33.954 "zone_append": false, 00:10:33.954 "compare": false, 00:10:33.954 "compare_and_write": false, 00:10:33.954 "abort": true, 00:10:33.954 "seek_hole": false, 00:10:33.954 "seek_data": false, 00:10:33.954 "copy": true, 00:10:33.954 "nvme_iov_md": false 00:10:33.954 }, 00:10:33.954 "memory_domains": [ 00:10:33.954 { 00:10:33.954 "dma_device_id": "system", 00:10:33.954 "dma_device_type": 1 00:10:33.954 }, 00:10:33.954 { 00:10:33.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.954 "dma_device_type": 2 00:10:33.954 } 00:10:33.954 ], 00:10:33.954 "driver_specific": {} 00:10:33.954 } 00:10:33.954 ] 00:10:33.954 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.954 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:33.954 17:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:33.954 17:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:33.954 17:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:33.954 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.954 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.954 BaseBdev4 00:10:33.954 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.954 17:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:33.954 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:33.954 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:33.954 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:33.954 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:33.954 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:33.954 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:33.954 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.954 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.954 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.954 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:33.954 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.954 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:33.954 [ 00:10:33.954 { 00:10:33.954 "name": "BaseBdev4", 00:10:33.954 "aliases": [ 00:10:33.955 "50e79bd3-0d43-4d85-9a20-2e659309bff0" 00:10:33.955 ], 00:10:33.955 "product_name": "Malloc disk", 00:10:33.955 "block_size": 512, 00:10:33.955 "num_blocks": 65536, 00:10:33.955 "uuid": "50e79bd3-0d43-4d85-9a20-2e659309bff0", 00:10:33.955 "assigned_rate_limits": { 00:10:33.955 "rw_ios_per_sec": 0, 00:10:33.955 "rw_mbytes_per_sec": 0, 00:10:33.955 "r_mbytes_per_sec": 0, 00:10:33.955 "w_mbytes_per_sec": 0 00:10:33.955 }, 00:10:33.955 "claimed": false, 00:10:33.955 "zoned": false, 00:10:33.955 "supported_io_types": { 00:10:33.955 "read": true, 00:10:33.955 "write": true, 00:10:33.955 "unmap": true, 00:10:33.955 "flush": true, 00:10:33.955 "reset": true, 00:10:33.955 "nvme_admin": false, 00:10:33.955 "nvme_io": false, 00:10:33.955 "nvme_io_md": false, 00:10:33.955 "write_zeroes": true, 00:10:33.955 "zcopy": true, 00:10:33.955 "get_zone_info": false, 00:10:33.955 "zone_management": false, 00:10:33.955 "zone_append": false, 00:10:33.955 "compare": false, 00:10:33.955 "compare_and_write": false, 00:10:33.955 "abort": true, 00:10:34.211 "seek_hole": false, 00:10:34.211 "seek_data": false, 00:10:34.211 "copy": true, 00:10:34.211 "nvme_iov_md": false 00:10:34.211 }, 00:10:34.211 "memory_domains": [ 00:10:34.211 { 00:10:34.211 "dma_device_id": "system", 00:10:34.211 "dma_device_type": 1 00:10:34.211 }, 00:10:34.211 { 00:10:34.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.211 "dma_device_type": 2 00:10:34.211 } 00:10:34.211 ], 00:10:34.211 "driver_specific": {} 00:10:34.211 } 00:10:34.211 ] 00:10:34.211 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.211 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:34.211 17:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:34.211 17:27:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:34.211 17:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:34.211 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.212 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.212 [2024-12-07 17:27:07.344824] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:34.212 [2024-12-07 17:27:07.344965] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:34.212 [2024-12-07 17:27:07.345010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:34.212 [2024-12-07 17:27:07.347108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:34.212 [2024-12-07 17:27:07.347200] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:34.212 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.212 17:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:34.212 17:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.212 17:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:34.212 17:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:34.212 17:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.212 17:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:34.212 17:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.212 17:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.212 17:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.212 17:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.212 17:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.212 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.212 17:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.212 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.212 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.212 17:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.212 "name": "Existed_Raid", 00:10:34.212 "uuid": "ce205fac-9224-4817-8430-586ee56b4653", 00:10:34.212 "strip_size_kb": 64, 00:10:34.212 "state": "configuring", 00:10:34.212 "raid_level": "raid0", 00:10:34.212 "superblock": true, 00:10:34.212 "num_base_bdevs": 4, 00:10:34.212 "num_base_bdevs_discovered": 3, 00:10:34.212 "num_base_bdevs_operational": 4, 00:10:34.212 "base_bdevs_list": [ 00:10:34.212 { 00:10:34.212 "name": "BaseBdev1", 00:10:34.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.212 "is_configured": false, 00:10:34.212 "data_offset": 0, 00:10:34.212 "data_size": 0 00:10:34.212 }, 00:10:34.212 { 00:10:34.212 "name": "BaseBdev2", 00:10:34.212 "uuid": "079b017b-0a4e-48e9-9a9f-b22b89fb62d7", 00:10:34.212 "is_configured": true, 00:10:34.212 "data_offset": 2048, 00:10:34.212 "data_size": 63488 
00:10:34.212 }, 00:10:34.212 { 00:10:34.212 "name": "BaseBdev3", 00:10:34.212 "uuid": "8113a0ad-624f-4f95-a811-0f2527fbb5c0", 00:10:34.212 "is_configured": true, 00:10:34.212 "data_offset": 2048, 00:10:34.212 "data_size": 63488 00:10:34.212 }, 00:10:34.212 { 00:10:34.212 "name": "BaseBdev4", 00:10:34.212 "uuid": "50e79bd3-0d43-4d85-9a20-2e659309bff0", 00:10:34.212 "is_configured": true, 00:10:34.212 "data_offset": 2048, 00:10:34.212 "data_size": 63488 00:10:34.212 } 00:10:34.212 ] 00:10:34.212 }' 00:10:34.212 17:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.212 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.469 17:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:34.469 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.469 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.469 [2024-12-07 17:27:07.772164] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:34.469 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.469 17:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:34.469 17:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.469 17:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:34.469 17:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:34.469 17:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.469 17:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:34.469 17:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.469 17:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.469 17:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.469 17:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.469 17:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.469 17:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.469 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.469 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.469 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.469 17:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.469 "name": "Existed_Raid", 00:10:34.469 "uuid": "ce205fac-9224-4817-8430-586ee56b4653", 00:10:34.469 "strip_size_kb": 64, 00:10:34.469 "state": "configuring", 00:10:34.469 "raid_level": "raid0", 00:10:34.469 "superblock": true, 00:10:34.469 "num_base_bdevs": 4, 00:10:34.469 "num_base_bdevs_discovered": 2, 00:10:34.469 "num_base_bdevs_operational": 4, 00:10:34.469 "base_bdevs_list": [ 00:10:34.469 { 00:10:34.469 "name": "BaseBdev1", 00:10:34.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.469 "is_configured": false, 00:10:34.469 "data_offset": 0, 00:10:34.469 "data_size": 0 00:10:34.469 }, 00:10:34.469 { 00:10:34.469 "name": null, 00:10:34.469 "uuid": "079b017b-0a4e-48e9-9a9f-b22b89fb62d7", 00:10:34.469 "is_configured": false, 00:10:34.469 "data_offset": 0, 00:10:34.469 "data_size": 63488 
00:10:34.469 }, 00:10:34.469 { 00:10:34.469 "name": "BaseBdev3", 00:10:34.469 "uuid": "8113a0ad-624f-4f95-a811-0f2527fbb5c0", 00:10:34.469 "is_configured": true, 00:10:34.469 "data_offset": 2048, 00:10:34.469 "data_size": 63488 00:10:34.469 }, 00:10:34.469 { 00:10:34.469 "name": "BaseBdev4", 00:10:34.469 "uuid": "50e79bd3-0d43-4d85-9a20-2e659309bff0", 00:10:34.469 "is_configured": true, 00:10:34.469 "data_offset": 2048, 00:10:34.469 "data_size": 63488 00:10:34.469 } 00:10:34.469 ] 00:10:34.469 }' 00:10:34.469 17:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.469 17:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.033 17:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.033 17:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.033 17:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:35.033 17:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.033 17:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.033 17:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:35.033 17:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:35.033 17:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.033 17:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.033 [2024-12-07 17:27:08.273655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:35.033 BaseBdev1 00:10:35.033 17:27:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.033 17:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:35.033 17:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:35.033 17:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:35.033 17:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:35.033 17:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:35.033 17:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:35.033 17:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:35.033 17:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.033 17:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.033 17:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.033 17:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:35.033 17:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.033 17:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.033 [ 00:10:35.033 { 00:10:35.033 "name": "BaseBdev1", 00:10:35.033 "aliases": [ 00:10:35.033 "a7ab9e15-3e62-4f13-9974-e93c70f2d82c" 00:10:35.033 ], 00:10:35.033 "product_name": "Malloc disk", 00:10:35.033 "block_size": 512, 00:10:35.033 "num_blocks": 65536, 00:10:35.033 "uuid": "a7ab9e15-3e62-4f13-9974-e93c70f2d82c", 00:10:35.033 "assigned_rate_limits": { 00:10:35.033 "rw_ios_per_sec": 0, 00:10:35.033 "rw_mbytes_per_sec": 0, 
00:10:35.033 "r_mbytes_per_sec": 0, 00:10:35.033 "w_mbytes_per_sec": 0 00:10:35.033 }, 00:10:35.033 "claimed": true, 00:10:35.033 "claim_type": "exclusive_write", 00:10:35.033 "zoned": false, 00:10:35.033 "supported_io_types": { 00:10:35.033 "read": true, 00:10:35.033 "write": true, 00:10:35.033 "unmap": true, 00:10:35.033 "flush": true, 00:10:35.033 "reset": true, 00:10:35.033 "nvme_admin": false, 00:10:35.033 "nvme_io": false, 00:10:35.033 "nvme_io_md": false, 00:10:35.033 "write_zeroes": true, 00:10:35.033 "zcopy": true, 00:10:35.033 "get_zone_info": false, 00:10:35.033 "zone_management": false, 00:10:35.033 "zone_append": false, 00:10:35.033 "compare": false, 00:10:35.033 "compare_and_write": false, 00:10:35.033 "abort": true, 00:10:35.033 "seek_hole": false, 00:10:35.033 "seek_data": false, 00:10:35.033 "copy": true, 00:10:35.033 "nvme_iov_md": false 00:10:35.033 }, 00:10:35.033 "memory_domains": [ 00:10:35.033 { 00:10:35.033 "dma_device_id": "system", 00:10:35.033 "dma_device_type": 1 00:10:35.033 }, 00:10:35.033 { 00:10:35.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.033 "dma_device_type": 2 00:10:35.033 } 00:10:35.033 ], 00:10:35.033 "driver_specific": {} 00:10:35.033 } 00:10:35.033 ] 00:10:35.033 17:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.033 17:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:35.033 17:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:35.033 17:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.033 17:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:35.033 17:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:35.033 17:27:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:35.033 17:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:35.033 17:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.033 17:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.033 17:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.033 17:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.033 17:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.033 17:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.033 17:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.033 17:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.033 17:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.033 17:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.033 "name": "Existed_Raid", 00:10:35.033 "uuid": "ce205fac-9224-4817-8430-586ee56b4653", 00:10:35.033 "strip_size_kb": 64, 00:10:35.033 "state": "configuring", 00:10:35.033 "raid_level": "raid0", 00:10:35.033 "superblock": true, 00:10:35.033 "num_base_bdevs": 4, 00:10:35.033 "num_base_bdevs_discovered": 3, 00:10:35.033 "num_base_bdevs_operational": 4, 00:10:35.033 "base_bdevs_list": [ 00:10:35.033 { 00:10:35.033 "name": "BaseBdev1", 00:10:35.033 "uuid": "a7ab9e15-3e62-4f13-9974-e93c70f2d82c", 00:10:35.033 "is_configured": true, 00:10:35.033 "data_offset": 2048, 00:10:35.033 "data_size": 63488 00:10:35.033 }, 00:10:35.033 { 
00:10:35.033 "name": null, 00:10:35.033 "uuid": "079b017b-0a4e-48e9-9a9f-b22b89fb62d7", 00:10:35.033 "is_configured": false, 00:10:35.033 "data_offset": 0, 00:10:35.033 "data_size": 63488 00:10:35.033 }, 00:10:35.033 { 00:10:35.033 "name": "BaseBdev3", 00:10:35.033 "uuid": "8113a0ad-624f-4f95-a811-0f2527fbb5c0", 00:10:35.033 "is_configured": true, 00:10:35.033 "data_offset": 2048, 00:10:35.033 "data_size": 63488 00:10:35.033 }, 00:10:35.033 { 00:10:35.033 "name": "BaseBdev4", 00:10:35.033 "uuid": "50e79bd3-0d43-4d85-9a20-2e659309bff0", 00:10:35.033 "is_configured": true, 00:10:35.033 "data_offset": 2048, 00:10:35.033 "data_size": 63488 00:10:35.033 } 00:10:35.033 ] 00:10:35.033 }' 00:10:35.033 17:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.033 17:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.599 17:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.599 17:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.599 17:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.599 17:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:35.599 17:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.599 17:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:35.600 17:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:35.600 17:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.600 17:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.600 [2024-12-07 17:27:08.773004] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:35.600 17:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.600 17:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:35.600 17:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.600 17:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:35.600 17:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:35.600 17:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:35.600 17:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:35.600 17:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.600 17:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.600 17:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.600 17:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.600 17:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.600 17:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.600 17:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.600 17:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.600 17:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.600 17:27:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.600 "name": "Existed_Raid", 00:10:35.600 "uuid": "ce205fac-9224-4817-8430-586ee56b4653", 00:10:35.600 "strip_size_kb": 64, 00:10:35.600 "state": "configuring", 00:10:35.600 "raid_level": "raid0", 00:10:35.600 "superblock": true, 00:10:35.600 "num_base_bdevs": 4, 00:10:35.600 "num_base_bdevs_discovered": 2, 00:10:35.600 "num_base_bdevs_operational": 4, 00:10:35.600 "base_bdevs_list": [ 00:10:35.600 { 00:10:35.600 "name": "BaseBdev1", 00:10:35.600 "uuid": "a7ab9e15-3e62-4f13-9974-e93c70f2d82c", 00:10:35.600 "is_configured": true, 00:10:35.600 "data_offset": 2048, 00:10:35.600 "data_size": 63488 00:10:35.600 }, 00:10:35.600 { 00:10:35.600 "name": null, 00:10:35.600 "uuid": "079b017b-0a4e-48e9-9a9f-b22b89fb62d7", 00:10:35.600 "is_configured": false, 00:10:35.600 "data_offset": 0, 00:10:35.600 "data_size": 63488 00:10:35.600 }, 00:10:35.600 { 00:10:35.600 "name": null, 00:10:35.600 "uuid": "8113a0ad-624f-4f95-a811-0f2527fbb5c0", 00:10:35.600 "is_configured": false, 00:10:35.600 "data_offset": 0, 00:10:35.600 "data_size": 63488 00:10:35.600 }, 00:10:35.600 { 00:10:35.600 "name": "BaseBdev4", 00:10:35.600 "uuid": "50e79bd3-0d43-4d85-9a20-2e659309bff0", 00:10:35.600 "is_configured": true, 00:10:35.600 "data_offset": 2048, 00:10:35.600 "data_size": 63488 00:10:35.600 } 00:10:35.600 ] 00:10:35.600 }' 00:10:35.600 17:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.600 17:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.894 17:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:35.894 17:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.894 17:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.894 
17:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.181 17:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.181 17:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:36.181 17:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:36.181 17:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.181 17:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.181 [2024-12-07 17:27:09.288084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:36.181 17:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.181 17:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:36.181 17:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.181 17:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.181 17:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:36.181 17:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.181 17:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:36.181 17:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.181 17:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.181 17:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:36.181 17:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.181 17:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.181 17:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.181 17:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.181 17:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.181 17:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.181 17:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.181 "name": "Existed_Raid", 00:10:36.181 "uuid": "ce205fac-9224-4817-8430-586ee56b4653", 00:10:36.181 "strip_size_kb": 64, 00:10:36.181 "state": "configuring", 00:10:36.181 "raid_level": "raid0", 00:10:36.181 "superblock": true, 00:10:36.181 "num_base_bdevs": 4, 00:10:36.181 "num_base_bdevs_discovered": 3, 00:10:36.181 "num_base_bdevs_operational": 4, 00:10:36.181 "base_bdevs_list": [ 00:10:36.181 { 00:10:36.181 "name": "BaseBdev1", 00:10:36.181 "uuid": "a7ab9e15-3e62-4f13-9974-e93c70f2d82c", 00:10:36.181 "is_configured": true, 00:10:36.181 "data_offset": 2048, 00:10:36.181 "data_size": 63488 00:10:36.181 }, 00:10:36.181 { 00:10:36.181 "name": null, 00:10:36.181 "uuid": "079b017b-0a4e-48e9-9a9f-b22b89fb62d7", 00:10:36.181 "is_configured": false, 00:10:36.181 "data_offset": 0, 00:10:36.181 "data_size": 63488 00:10:36.181 }, 00:10:36.181 { 00:10:36.181 "name": "BaseBdev3", 00:10:36.181 "uuid": "8113a0ad-624f-4f95-a811-0f2527fbb5c0", 00:10:36.181 "is_configured": true, 00:10:36.181 "data_offset": 2048, 00:10:36.181 "data_size": 63488 00:10:36.181 }, 00:10:36.181 { 00:10:36.181 "name": "BaseBdev4", 00:10:36.181 "uuid": 
"50e79bd3-0d43-4d85-9a20-2e659309bff0", 00:10:36.181 "is_configured": true, 00:10:36.181 "data_offset": 2048, 00:10:36.181 "data_size": 63488 00:10:36.181 } 00:10:36.181 ] 00:10:36.181 }' 00:10:36.181 17:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.181 17:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.441 17:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:36.441 17:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.441 17:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.441 17:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.441 17:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.441 17:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:36.441 17:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:36.441 17:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.441 17:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.441 [2024-12-07 17:27:09.779299] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:36.700 17:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.700 17:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:36.700 17:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.700 17:27:09 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.700 17:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:36.700 17:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.700 17:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:36.700 17:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.700 17:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.700 17:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.700 17:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.700 17:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.700 17:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.700 17:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.700 17:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.700 17:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.700 17:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.700 "name": "Existed_Raid", 00:10:36.700 "uuid": "ce205fac-9224-4817-8430-586ee56b4653", 00:10:36.700 "strip_size_kb": 64, 00:10:36.700 "state": "configuring", 00:10:36.700 "raid_level": "raid0", 00:10:36.700 "superblock": true, 00:10:36.700 "num_base_bdevs": 4, 00:10:36.700 "num_base_bdevs_discovered": 2, 00:10:36.700 "num_base_bdevs_operational": 4, 00:10:36.700 "base_bdevs_list": [ 00:10:36.700 { 00:10:36.700 "name": null, 00:10:36.700 
"uuid": "a7ab9e15-3e62-4f13-9974-e93c70f2d82c", 00:10:36.700 "is_configured": false, 00:10:36.700 "data_offset": 0, 00:10:36.700 "data_size": 63488 00:10:36.700 }, 00:10:36.700 { 00:10:36.701 "name": null, 00:10:36.701 "uuid": "079b017b-0a4e-48e9-9a9f-b22b89fb62d7", 00:10:36.701 "is_configured": false, 00:10:36.701 "data_offset": 0, 00:10:36.701 "data_size": 63488 00:10:36.701 }, 00:10:36.701 { 00:10:36.701 "name": "BaseBdev3", 00:10:36.701 "uuid": "8113a0ad-624f-4f95-a811-0f2527fbb5c0", 00:10:36.701 "is_configured": true, 00:10:36.701 "data_offset": 2048, 00:10:36.701 "data_size": 63488 00:10:36.701 }, 00:10:36.701 { 00:10:36.701 "name": "BaseBdev4", 00:10:36.701 "uuid": "50e79bd3-0d43-4d85-9a20-2e659309bff0", 00:10:36.701 "is_configured": true, 00:10:36.701 "data_offset": 2048, 00:10:36.701 "data_size": 63488 00:10:36.701 } 00:10:36.701 ] 00:10:36.701 }' 00:10:36.701 17:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.701 17:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.960 17:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.960 17:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.960 17:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.960 17:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:36.960 17:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.219 17:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:37.219 17:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:37.219 17:27:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.219 17:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.219 [2024-12-07 17:27:10.367985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:37.219 17:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.219 17:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:37.219 17:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.219 17:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.219 17:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:37.219 17:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.219 17:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:37.219 17:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.219 17:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.219 17:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.219 17:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.219 17:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.219 17:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.219 17:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.219 17:27:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.219 17:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.219 17:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.219 "name": "Existed_Raid", 00:10:37.219 "uuid": "ce205fac-9224-4817-8430-586ee56b4653", 00:10:37.219 "strip_size_kb": 64, 00:10:37.219 "state": "configuring", 00:10:37.219 "raid_level": "raid0", 00:10:37.219 "superblock": true, 00:10:37.219 "num_base_bdevs": 4, 00:10:37.219 "num_base_bdevs_discovered": 3, 00:10:37.219 "num_base_bdevs_operational": 4, 00:10:37.219 "base_bdevs_list": [ 00:10:37.219 { 00:10:37.219 "name": null, 00:10:37.219 "uuid": "a7ab9e15-3e62-4f13-9974-e93c70f2d82c", 00:10:37.219 "is_configured": false, 00:10:37.219 "data_offset": 0, 00:10:37.219 "data_size": 63488 00:10:37.219 }, 00:10:37.219 { 00:10:37.219 "name": "BaseBdev2", 00:10:37.219 "uuid": "079b017b-0a4e-48e9-9a9f-b22b89fb62d7", 00:10:37.219 "is_configured": true, 00:10:37.219 "data_offset": 2048, 00:10:37.219 "data_size": 63488 00:10:37.219 }, 00:10:37.219 { 00:10:37.219 "name": "BaseBdev3", 00:10:37.219 "uuid": "8113a0ad-624f-4f95-a811-0f2527fbb5c0", 00:10:37.219 "is_configured": true, 00:10:37.219 "data_offset": 2048, 00:10:37.219 "data_size": 63488 00:10:37.219 }, 00:10:37.219 { 00:10:37.219 "name": "BaseBdev4", 00:10:37.219 "uuid": "50e79bd3-0d43-4d85-9a20-2e659309bff0", 00:10:37.219 "is_configured": true, 00:10:37.219 "data_offset": 2048, 00:10:37.219 "data_size": 63488 00:10:37.219 } 00:10:37.219 ] 00:10:37.219 }' 00:10:37.219 17:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.219 17:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.479 17:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:37.479 17:27:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.479 17:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.479 17:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.479 17:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.479 17:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:37.479 17:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:37.479 17:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.479 17:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.479 17:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.479 17:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.738 17:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a7ab9e15-3e62-4f13-9974-e93c70f2d82c 00:10:37.738 17:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.738 17:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.738 [2024-12-07 17:27:10.906424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:37.738 [2024-12-07 17:27:10.906710] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:37.738 [2024-12-07 17:27:10.906724] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:37.738 [2024-12-07 17:27:10.907048] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:10:37.738 [2024-12-07 17:27:10.907204] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:37.738 [2024-12-07 17:27:10.907217] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:37.738 [2024-12-07 17:27:10.907361] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:37.738 NewBaseBdev 00:10:37.738 17:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.738 17:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:37.738 17:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:37.738 17:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:37.738 17:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:37.738 17:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:37.738 17:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:37.738 17:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:37.738 17:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.739 17:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.739 17:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.739 17:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:37.739 17:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.739 17:27:10 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.739 [ 00:10:37.739 { 00:10:37.739 "name": "NewBaseBdev", 00:10:37.739 "aliases": [ 00:10:37.739 "a7ab9e15-3e62-4f13-9974-e93c70f2d82c" 00:10:37.739 ], 00:10:37.739 "product_name": "Malloc disk", 00:10:37.739 "block_size": 512, 00:10:37.739 "num_blocks": 65536, 00:10:37.739 "uuid": "a7ab9e15-3e62-4f13-9974-e93c70f2d82c", 00:10:37.739 "assigned_rate_limits": { 00:10:37.739 "rw_ios_per_sec": 0, 00:10:37.739 "rw_mbytes_per_sec": 0, 00:10:37.739 "r_mbytes_per_sec": 0, 00:10:37.739 "w_mbytes_per_sec": 0 00:10:37.739 }, 00:10:37.739 "claimed": true, 00:10:37.739 "claim_type": "exclusive_write", 00:10:37.739 "zoned": false, 00:10:37.739 "supported_io_types": { 00:10:37.739 "read": true, 00:10:37.739 "write": true, 00:10:37.739 "unmap": true, 00:10:37.739 "flush": true, 00:10:37.739 "reset": true, 00:10:37.739 "nvme_admin": false, 00:10:37.739 "nvme_io": false, 00:10:37.739 "nvme_io_md": false, 00:10:37.739 "write_zeroes": true, 00:10:37.739 "zcopy": true, 00:10:37.739 "get_zone_info": false, 00:10:37.739 "zone_management": false, 00:10:37.739 "zone_append": false, 00:10:37.739 "compare": false, 00:10:37.739 "compare_and_write": false, 00:10:37.739 "abort": true, 00:10:37.739 "seek_hole": false, 00:10:37.739 "seek_data": false, 00:10:37.739 "copy": true, 00:10:37.739 "nvme_iov_md": false 00:10:37.739 }, 00:10:37.739 "memory_domains": [ 00:10:37.739 { 00:10:37.739 "dma_device_id": "system", 00:10:37.739 "dma_device_type": 1 00:10:37.739 }, 00:10:37.739 { 00:10:37.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.739 "dma_device_type": 2 00:10:37.739 } 00:10:37.739 ], 00:10:37.739 "driver_specific": {} 00:10:37.739 } 00:10:37.739 ] 00:10:37.739 17:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.739 17:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:37.739 17:27:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:37.739 17:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.739 17:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:37.739 17:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:37.739 17:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.739 17:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:37.739 17:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.739 17:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.739 17:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.739 17:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.739 17:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.739 17:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.739 17:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.739 17:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.739 17:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.739 17:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.739 "name": "Existed_Raid", 00:10:37.739 "uuid": "ce205fac-9224-4817-8430-586ee56b4653", 00:10:37.739 "strip_size_kb": 64, 00:10:37.739 
"state": "online", 00:10:37.739 "raid_level": "raid0", 00:10:37.739 "superblock": true, 00:10:37.739 "num_base_bdevs": 4, 00:10:37.739 "num_base_bdevs_discovered": 4, 00:10:37.739 "num_base_bdevs_operational": 4, 00:10:37.739 "base_bdevs_list": [ 00:10:37.739 { 00:10:37.739 "name": "NewBaseBdev", 00:10:37.739 "uuid": "a7ab9e15-3e62-4f13-9974-e93c70f2d82c", 00:10:37.739 "is_configured": true, 00:10:37.739 "data_offset": 2048, 00:10:37.739 "data_size": 63488 00:10:37.739 }, 00:10:37.739 { 00:10:37.739 "name": "BaseBdev2", 00:10:37.739 "uuid": "079b017b-0a4e-48e9-9a9f-b22b89fb62d7", 00:10:37.739 "is_configured": true, 00:10:37.739 "data_offset": 2048, 00:10:37.739 "data_size": 63488 00:10:37.739 }, 00:10:37.739 { 00:10:37.739 "name": "BaseBdev3", 00:10:37.739 "uuid": "8113a0ad-624f-4f95-a811-0f2527fbb5c0", 00:10:37.739 "is_configured": true, 00:10:37.739 "data_offset": 2048, 00:10:37.739 "data_size": 63488 00:10:37.739 }, 00:10:37.739 { 00:10:37.739 "name": "BaseBdev4", 00:10:37.739 "uuid": "50e79bd3-0d43-4d85-9a20-2e659309bff0", 00:10:37.739 "is_configured": true, 00:10:37.739 "data_offset": 2048, 00:10:37.739 "data_size": 63488 00:10:37.739 } 00:10:37.739 ] 00:10:37.739 }' 00:10:37.739 17:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.739 17:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.998 17:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:37.998 17:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:37.998 17:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:37.998 17:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:37.998 17:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:37.998 
17:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:37.998 17:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:37.998 17:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.998 17:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.998 17:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:37.998 [2024-12-07 17:27:11.358295] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:37.998 17:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.257 17:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:38.257 "name": "Existed_Raid", 00:10:38.257 "aliases": [ 00:10:38.257 "ce205fac-9224-4817-8430-586ee56b4653" 00:10:38.257 ], 00:10:38.257 "product_name": "Raid Volume", 00:10:38.257 "block_size": 512, 00:10:38.257 "num_blocks": 253952, 00:10:38.257 "uuid": "ce205fac-9224-4817-8430-586ee56b4653", 00:10:38.257 "assigned_rate_limits": { 00:10:38.257 "rw_ios_per_sec": 0, 00:10:38.257 "rw_mbytes_per_sec": 0, 00:10:38.257 "r_mbytes_per_sec": 0, 00:10:38.257 "w_mbytes_per_sec": 0 00:10:38.257 }, 00:10:38.257 "claimed": false, 00:10:38.257 "zoned": false, 00:10:38.257 "supported_io_types": { 00:10:38.257 "read": true, 00:10:38.257 "write": true, 00:10:38.257 "unmap": true, 00:10:38.257 "flush": true, 00:10:38.257 "reset": true, 00:10:38.257 "nvme_admin": false, 00:10:38.257 "nvme_io": false, 00:10:38.257 "nvme_io_md": false, 00:10:38.257 "write_zeroes": true, 00:10:38.257 "zcopy": false, 00:10:38.257 "get_zone_info": false, 00:10:38.257 "zone_management": false, 00:10:38.257 "zone_append": false, 00:10:38.257 "compare": false, 00:10:38.257 "compare_and_write": false, 00:10:38.257 "abort": 
false, 00:10:38.257 "seek_hole": false, 00:10:38.257 "seek_data": false, 00:10:38.257 "copy": false, 00:10:38.257 "nvme_iov_md": false 00:10:38.257 }, 00:10:38.257 "memory_domains": [ 00:10:38.257 { 00:10:38.257 "dma_device_id": "system", 00:10:38.257 "dma_device_type": 1 00:10:38.257 }, 00:10:38.257 { 00:10:38.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.257 "dma_device_type": 2 00:10:38.257 }, 00:10:38.257 { 00:10:38.257 "dma_device_id": "system", 00:10:38.257 "dma_device_type": 1 00:10:38.257 }, 00:10:38.257 { 00:10:38.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.257 "dma_device_type": 2 00:10:38.257 }, 00:10:38.257 { 00:10:38.257 "dma_device_id": "system", 00:10:38.257 "dma_device_type": 1 00:10:38.257 }, 00:10:38.257 { 00:10:38.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.257 "dma_device_type": 2 00:10:38.257 }, 00:10:38.257 { 00:10:38.257 "dma_device_id": "system", 00:10:38.257 "dma_device_type": 1 00:10:38.257 }, 00:10:38.257 { 00:10:38.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.257 "dma_device_type": 2 00:10:38.257 } 00:10:38.257 ], 00:10:38.257 "driver_specific": { 00:10:38.257 "raid": { 00:10:38.257 "uuid": "ce205fac-9224-4817-8430-586ee56b4653", 00:10:38.257 "strip_size_kb": 64, 00:10:38.257 "state": "online", 00:10:38.257 "raid_level": "raid0", 00:10:38.257 "superblock": true, 00:10:38.257 "num_base_bdevs": 4, 00:10:38.257 "num_base_bdevs_discovered": 4, 00:10:38.257 "num_base_bdevs_operational": 4, 00:10:38.257 "base_bdevs_list": [ 00:10:38.257 { 00:10:38.257 "name": "NewBaseBdev", 00:10:38.257 "uuid": "a7ab9e15-3e62-4f13-9974-e93c70f2d82c", 00:10:38.257 "is_configured": true, 00:10:38.257 "data_offset": 2048, 00:10:38.257 "data_size": 63488 00:10:38.257 }, 00:10:38.257 { 00:10:38.257 "name": "BaseBdev2", 00:10:38.257 "uuid": "079b017b-0a4e-48e9-9a9f-b22b89fb62d7", 00:10:38.257 "is_configured": true, 00:10:38.257 "data_offset": 2048, 00:10:38.257 "data_size": 63488 00:10:38.257 }, 00:10:38.257 { 00:10:38.257 
"name": "BaseBdev3", 00:10:38.257 "uuid": "8113a0ad-624f-4f95-a811-0f2527fbb5c0", 00:10:38.257 "is_configured": true, 00:10:38.257 "data_offset": 2048, 00:10:38.257 "data_size": 63488 00:10:38.257 }, 00:10:38.257 { 00:10:38.257 "name": "BaseBdev4", 00:10:38.257 "uuid": "50e79bd3-0d43-4d85-9a20-2e659309bff0", 00:10:38.257 "is_configured": true, 00:10:38.257 "data_offset": 2048, 00:10:38.257 "data_size": 63488 00:10:38.257 } 00:10:38.257 ] 00:10:38.257 } 00:10:38.257 } 00:10:38.257 }' 00:10:38.257 17:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:38.257 17:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:38.257 BaseBdev2 00:10:38.257 BaseBdev3 00:10:38.257 BaseBdev4' 00:10:38.257 17:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.257 17:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:38.257 17:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:38.257 17:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.257 17:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:38.257 17:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.257 17:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.257 17:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.257 17:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:38.258 17:27:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:38.258 17:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:38.258 17:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:38.258 17:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.258 17:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.258 17:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.258 17:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.258 17:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:38.258 17:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:38.258 17:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:38.258 17:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:38.258 17:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.258 17:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.258 17:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.258 17:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.258 17:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:38.258 17:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:10:38.258 17:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:38.258 17:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.258 17:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:38.258 17:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.258 17:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.517 17:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.517 17:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:38.517 17:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:38.517 17:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:38.517 17:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.517 17:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.517 [2024-12-07 17:27:11.685297] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:38.517 [2024-12-07 17:27:11.685436] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:38.517 [2024-12-07 17:27:11.685551] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:38.517 [2024-12-07 17:27:11.685649] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:38.517 [2024-12-07 17:27:11.685695] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:10:38.517 17:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.517 17:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70067 00:10:38.517 17:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 70067 ']' 00:10:38.517 17:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 70067 00:10:38.517 17:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:38.517 17:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:38.517 17:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70067 00:10:38.517 17:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:38.517 17:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:38.517 17:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70067' 00:10:38.517 killing process with pid 70067 00:10:38.517 17:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 70067 00:10:38.517 [2024-12-07 17:27:11.735799] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:38.517 17:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 70067 00:10:39.084 [2024-12-07 17:27:12.164073] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:40.052 17:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:40.052 00:10:40.052 real 0m11.560s 00:10:40.052 user 0m18.081s 00:10:40.052 sys 0m2.069s 00:10:40.052 ************************************ 00:10:40.052 END TEST raid_state_function_test_sb 00:10:40.052 
************************************ 00:10:40.052 17:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:40.052 17:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.330 17:27:13 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:10:40.330 17:27:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:40.330 17:27:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:40.330 17:27:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:40.330 ************************************ 00:10:40.330 START TEST raid_superblock_test 00:10:40.330 ************************************ 00:10:40.330 17:27:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:10:40.330 17:27:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:40.330 17:27:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:40.330 17:27:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:40.330 17:27:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:40.330 17:27:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:40.330 17:27:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:40.330 17:27:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:40.330 17:27:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:40.330 17:27:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:40.330 17:27:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:40.330 17:27:13 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:40.330 17:27:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:40.330 17:27:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:40.330 17:27:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:40.330 17:27:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:40.330 17:27:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:40.330 17:27:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70739 00:10:40.330 17:27:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:40.330 17:27:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70739 00:10:40.330 17:27:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 70739 ']' 00:10:40.330 17:27:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.330 17:27:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:40.330 17:27:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:40.330 17:27:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:40.330 17:27:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.330 [2024-12-07 17:27:13.547439] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:10:40.330 [2024-12-07 17:27:13.547630] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70739 ] 00:10:40.330 [2024-12-07 17:27:13.702000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:40.586 [2024-12-07 17:27:13.829875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.844 [2024-12-07 17:27:14.059116] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:40.844 [2024-12-07 17:27:14.059279] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:41.101 17:27:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:41.101 17:27:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:41.101 17:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:41.101 17:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:41.101 17:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:41.101 17:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:41.101 17:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:41.101 17:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:41.101 17:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:41.101 17:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:41.101 17:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:41.101 
17:27:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.101 17:27:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.101 malloc1 00:10:41.101 17:27:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.101 17:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:41.101 17:27:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.101 17:27:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.101 [2024-12-07 17:27:14.434728] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:41.101 [2024-12-07 17:27:14.434861] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:41.101 [2024-12-07 17:27:14.434905] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:41.101 [2024-12-07 17:27:14.434942] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:41.101 [2024-12-07 17:27:14.437296] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:41.101 [2024-12-07 17:27:14.437369] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:41.101 pt1 00:10:41.101 17:27:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.101 17:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:41.101 17:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:41.101 17:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:41.101 17:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:41.101 17:27:14 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:41.101 17:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:41.101 17:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:41.101 17:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:41.101 17:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:41.101 17:27:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.101 17:27:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.359 malloc2 00:10:41.359 17:27:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.359 17:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:41.359 17:27:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.359 17:27:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.359 [2024-12-07 17:27:14.495503] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:41.359 [2024-12-07 17:27:14.495623] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:41.359 [2024-12-07 17:27:14.495681] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:41.359 [2024-12-07 17:27:14.495713] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:41.359 [2024-12-07 17:27:14.498101] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:41.359 [2024-12-07 17:27:14.498171] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:41.359 
pt2 00:10:41.359 17:27:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.359 17:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:41.359 17:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:41.359 17:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:41.359 17:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:41.359 17:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:41.359 17:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:41.359 17:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:41.359 17:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:41.359 17:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:41.359 17:27:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.359 17:27:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.359 malloc3 00:10:41.359 17:27:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.359 17:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:41.359 17:27:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.359 17:27:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.359 [2024-12-07 17:27:14.570143] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:41.359 [2024-12-07 17:27:14.570242] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:41.359 [2024-12-07 17:27:14.570283] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:41.359 [2024-12-07 17:27:14.570322] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:41.359 [2024-12-07 17:27:14.572708] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:41.359 [2024-12-07 17:27:14.572783] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:41.359 pt3 00:10:41.359 17:27:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.359 17:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:41.359 17:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:41.359 17:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:41.359 17:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:41.359 17:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:41.359 17:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:41.359 17:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:41.359 17:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:41.359 17:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:41.359 17:27:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.359 17:27:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.359 malloc4 00:10:41.359 17:27:14 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.359 17:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:41.359 17:27:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.359 17:27:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.359 [2024-12-07 17:27:14.632570] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:41.359 [2024-12-07 17:27:14.632720] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:41.359 [2024-12-07 17:27:14.632747] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:41.359 [2024-12-07 17:27:14.632758] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:41.359 [2024-12-07 17:27:14.635152] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:41.359 [2024-12-07 17:27:14.635190] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:41.359 pt4 00:10:41.359 17:27:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.359 17:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:41.359 17:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:41.359 17:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:41.359 17:27:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.359 17:27:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.359 [2024-12-07 17:27:14.644573] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:41.359 [2024-12-07 
17:27:14.646626] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:41.359 [2024-12-07 17:27:14.646755] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:41.359 [2024-12-07 17:27:14.646837] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:41.359 [2024-12-07 17:27:14.647068] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:41.359 [2024-12-07 17:27:14.647126] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:41.359 [2024-12-07 17:27:14.647408] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:41.359 [2024-12-07 17:27:14.647625] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:41.360 [2024-12-07 17:27:14.647671] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:41.360 [2024-12-07 17:27:14.647855] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:41.360 17:27:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.360 17:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:41.360 17:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:41.360 17:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:41.360 17:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:41.360 17:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.360 17:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:41.360 17:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:41.360 17:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.360 17:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.360 17:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.360 17:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.360 17:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:41.360 17:27:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.360 17:27:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.360 17:27:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.360 17:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.360 "name": "raid_bdev1", 00:10:41.360 "uuid": "e5f3798f-8708-4005-908b-4f023a9e8a5a", 00:10:41.360 "strip_size_kb": 64, 00:10:41.360 "state": "online", 00:10:41.360 "raid_level": "raid0", 00:10:41.360 "superblock": true, 00:10:41.360 "num_base_bdevs": 4, 00:10:41.360 "num_base_bdevs_discovered": 4, 00:10:41.360 "num_base_bdevs_operational": 4, 00:10:41.360 "base_bdevs_list": [ 00:10:41.360 { 00:10:41.360 "name": "pt1", 00:10:41.360 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:41.360 "is_configured": true, 00:10:41.360 "data_offset": 2048, 00:10:41.360 "data_size": 63488 00:10:41.360 }, 00:10:41.360 { 00:10:41.360 "name": "pt2", 00:10:41.360 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:41.360 "is_configured": true, 00:10:41.360 "data_offset": 2048, 00:10:41.360 "data_size": 63488 00:10:41.360 }, 00:10:41.360 { 00:10:41.360 "name": "pt3", 00:10:41.360 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:41.360 "is_configured": true, 00:10:41.360 "data_offset": 2048, 00:10:41.360 
"data_size": 63488 00:10:41.360 }, 00:10:41.360 { 00:10:41.360 "name": "pt4", 00:10:41.360 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:41.360 "is_configured": true, 00:10:41.360 "data_offset": 2048, 00:10:41.360 "data_size": 63488 00:10:41.360 } 00:10:41.360 ] 00:10:41.360 }' 00:10:41.360 17:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.360 17:27:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.928 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:41.928 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:41.928 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:41.928 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:41.928 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:41.928 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:41.928 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:41.928 17:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.928 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:41.928 17:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.928 [2024-12-07 17:27:15.056268] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:41.928 17:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.928 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:41.928 "name": "raid_bdev1", 00:10:41.928 "aliases": [ 00:10:41.928 "e5f3798f-8708-4005-908b-4f023a9e8a5a" 
00:10:41.928 ], 00:10:41.928 "product_name": "Raid Volume", 00:10:41.928 "block_size": 512, 00:10:41.928 "num_blocks": 253952, 00:10:41.928 "uuid": "e5f3798f-8708-4005-908b-4f023a9e8a5a", 00:10:41.928 "assigned_rate_limits": { 00:10:41.928 "rw_ios_per_sec": 0, 00:10:41.928 "rw_mbytes_per_sec": 0, 00:10:41.928 "r_mbytes_per_sec": 0, 00:10:41.928 "w_mbytes_per_sec": 0 00:10:41.928 }, 00:10:41.928 "claimed": false, 00:10:41.928 "zoned": false, 00:10:41.928 "supported_io_types": { 00:10:41.928 "read": true, 00:10:41.928 "write": true, 00:10:41.928 "unmap": true, 00:10:41.928 "flush": true, 00:10:41.928 "reset": true, 00:10:41.928 "nvme_admin": false, 00:10:41.928 "nvme_io": false, 00:10:41.928 "nvme_io_md": false, 00:10:41.928 "write_zeroes": true, 00:10:41.928 "zcopy": false, 00:10:41.928 "get_zone_info": false, 00:10:41.928 "zone_management": false, 00:10:41.928 "zone_append": false, 00:10:41.928 "compare": false, 00:10:41.928 "compare_and_write": false, 00:10:41.928 "abort": false, 00:10:41.928 "seek_hole": false, 00:10:41.928 "seek_data": false, 00:10:41.928 "copy": false, 00:10:41.928 "nvme_iov_md": false 00:10:41.928 }, 00:10:41.928 "memory_domains": [ 00:10:41.928 { 00:10:41.928 "dma_device_id": "system", 00:10:41.928 "dma_device_type": 1 00:10:41.928 }, 00:10:41.928 { 00:10:41.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.928 "dma_device_type": 2 00:10:41.928 }, 00:10:41.928 { 00:10:41.928 "dma_device_id": "system", 00:10:41.928 "dma_device_type": 1 00:10:41.928 }, 00:10:41.928 { 00:10:41.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.928 "dma_device_type": 2 00:10:41.928 }, 00:10:41.928 { 00:10:41.928 "dma_device_id": "system", 00:10:41.928 "dma_device_type": 1 00:10:41.928 }, 00:10:41.928 { 00:10:41.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.928 "dma_device_type": 2 00:10:41.928 }, 00:10:41.928 { 00:10:41.928 "dma_device_id": "system", 00:10:41.928 "dma_device_type": 1 00:10:41.928 }, 00:10:41.928 { 00:10:41.928 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:41.928 "dma_device_type": 2 00:10:41.928 } 00:10:41.928 ], 00:10:41.928 "driver_specific": { 00:10:41.928 "raid": { 00:10:41.928 "uuid": "e5f3798f-8708-4005-908b-4f023a9e8a5a", 00:10:41.928 "strip_size_kb": 64, 00:10:41.928 "state": "online", 00:10:41.928 "raid_level": "raid0", 00:10:41.928 "superblock": true, 00:10:41.928 "num_base_bdevs": 4, 00:10:41.928 "num_base_bdevs_discovered": 4, 00:10:41.928 "num_base_bdevs_operational": 4, 00:10:41.928 "base_bdevs_list": [ 00:10:41.928 { 00:10:41.928 "name": "pt1", 00:10:41.928 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:41.928 "is_configured": true, 00:10:41.928 "data_offset": 2048, 00:10:41.928 "data_size": 63488 00:10:41.928 }, 00:10:41.928 { 00:10:41.928 "name": "pt2", 00:10:41.928 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:41.928 "is_configured": true, 00:10:41.928 "data_offset": 2048, 00:10:41.928 "data_size": 63488 00:10:41.928 }, 00:10:41.928 { 00:10:41.928 "name": "pt3", 00:10:41.928 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:41.928 "is_configured": true, 00:10:41.928 "data_offset": 2048, 00:10:41.928 "data_size": 63488 00:10:41.928 }, 00:10:41.928 { 00:10:41.928 "name": "pt4", 00:10:41.928 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:41.928 "is_configured": true, 00:10:41.928 "data_offset": 2048, 00:10:41.928 "data_size": 63488 00:10:41.928 } 00:10:41.928 ] 00:10:41.928 } 00:10:41.928 } 00:10:41.928 }' 00:10:41.928 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:41.928 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:41.928 pt2 00:10:41.928 pt3 00:10:41.928 pt4' 00:10:41.928 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.928 17:27:15 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:41.928 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:41.928 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:41.928 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.928 17:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.928 17:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.928 17:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.928 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:41.928 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:41.928 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:41.928 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:41.928 17:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.928 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.928 17:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.928 17:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.928 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:41.928 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:41.928 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:41.928 17:27:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:41.928 17:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.929 17:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.929 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:42.186 17:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.186 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:42.186 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:42.186 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:42.186 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:42.186 17:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.186 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:42.186 17:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.186 17:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.186 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:42.186 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:42.186 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:42.186 17:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.186 17:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:10:42.186 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:42.186 [2024-12-07 17:27:15.407583] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:42.186 17:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.186 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e5f3798f-8708-4005-908b-4f023a9e8a5a 00:10:42.186 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z e5f3798f-8708-4005-908b-4f023a9e8a5a ']' 00:10:42.186 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:42.186 17:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.186 17:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.186 [2024-12-07 17:27:15.455190] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:42.186 [2024-12-07 17:27:15.455262] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:42.186 [2024-12-07 17:27:15.455378] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:42.186 [2024-12-07 17:27:15.455471] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:42.186 [2024-12-07 17:27:15.455491] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:42.186 17:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.186 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.186 17:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.186 17:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 
-- # set +x 00:10:42.186 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:42.186 17:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.186 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:42.186 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:42.186 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:42.186 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:42.186 17:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.186 17:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.186 17:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.186 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:42.186 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:42.186 17:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.186 17:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.186 17:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.186 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:42.186 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:42.186 17:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.186 17:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.186 17:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:10:42.186 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:42.186 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:42.186 17:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.186 17:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.186 17:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.186 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:42.186 17:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.186 17:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.186 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:42.444 17:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.444 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:42.444 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:42.444 17:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:42.445 17:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:42.445 17:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:42.445 17:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:42.445 17:27:15 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:42.445 17:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:42.445 17:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:42.445 17:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.445 17:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.445 [2024-12-07 17:27:15.623075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:42.445 [2024-12-07 17:27:15.625214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:42.445 [2024-12-07 17:27:15.625304] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:42.445 [2024-12-07 17:27:15.625354] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:42.445 [2024-12-07 17:27:15.625433] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:42.445 [2024-12-07 17:27:15.625511] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:42.445 [2024-12-07 17:27:15.625530] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:42.445 [2024-12-07 17:27:15.625548] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:42.445 [2024-12-07 17:27:15.625561] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:42.445 [2024-12-07 17:27:15.625573] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:10:42.445 request: 00:10:42.445 { 00:10:42.445 "name": "raid_bdev1", 00:10:42.445 "raid_level": "raid0", 00:10:42.445 "base_bdevs": [ 00:10:42.445 "malloc1", 00:10:42.445 "malloc2", 00:10:42.445 "malloc3", 00:10:42.445 "malloc4" 00:10:42.445 ], 00:10:42.445 "strip_size_kb": 64, 00:10:42.445 "superblock": false, 00:10:42.445 "method": "bdev_raid_create", 00:10:42.445 "req_id": 1 00:10:42.445 } 00:10:42.445 Got JSON-RPC error response 00:10:42.445 response: 00:10:42.445 { 00:10:42.445 "code": -17, 00:10:42.445 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:42.445 } 00:10:42.445 17:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:42.445 17:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:42.445 17:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:42.445 17:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:42.445 17:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:42.445 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.445 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:42.445 17:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.445 17:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.445 17:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.445 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:42.445 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:42.445 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:10:42.445 17:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.445 17:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.445 [2024-12-07 17:27:15.687036] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:42.445 [2024-12-07 17:27:15.687091] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:42.445 [2024-12-07 17:27:15.687111] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:42.445 [2024-12-07 17:27:15.687123] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:42.445 [2024-12-07 17:27:15.689525] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:42.445 [2024-12-07 17:27:15.689566] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:42.445 [2024-12-07 17:27:15.689639] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:42.445 [2024-12-07 17:27:15.689695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:42.445 pt1 00:10:42.445 17:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.445 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:42.445 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:42.445 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.445 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:42.445 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.445 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:42.445 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.445 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.445 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.445 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.445 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:42.445 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.445 17:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.445 17:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.445 17:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.445 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.445 "name": "raid_bdev1", 00:10:42.445 "uuid": "e5f3798f-8708-4005-908b-4f023a9e8a5a", 00:10:42.445 "strip_size_kb": 64, 00:10:42.445 "state": "configuring", 00:10:42.445 "raid_level": "raid0", 00:10:42.445 "superblock": true, 00:10:42.445 "num_base_bdevs": 4, 00:10:42.445 "num_base_bdevs_discovered": 1, 00:10:42.445 "num_base_bdevs_operational": 4, 00:10:42.445 "base_bdevs_list": [ 00:10:42.445 { 00:10:42.445 "name": "pt1", 00:10:42.445 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:42.445 "is_configured": true, 00:10:42.445 "data_offset": 2048, 00:10:42.445 "data_size": 63488 00:10:42.445 }, 00:10:42.445 { 00:10:42.445 "name": null, 00:10:42.445 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:42.445 "is_configured": false, 00:10:42.445 "data_offset": 2048, 00:10:42.445 "data_size": 63488 00:10:42.445 }, 00:10:42.445 { 00:10:42.445 "name": null, 00:10:42.445 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:10:42.445 "is_configured": false, 00:10:42.445 "data_offset": 2048, 00:10:42.445 "data_size": 63488 00:10:42.445 }, 00:10:42.445 { 00:10:42.445 "name": null, 00:10:42.445 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:42.445 "is_configured": false, 00:10:42.445 "data_offset": 2048, 00:10:42.445 "data_size": 63488 00:10:42.445 } 00:10:42.445 ] 00:10:42.445 }' 00:10:42.445 17:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.445 17:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.010 17:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:43.010 17:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:43.010 17:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.010 17:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.010 [2024-12-07 17:27:16.118363] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:43.010 [2024-12-07 17:27:16.118567] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:43.010 [2024-12-07 17:27:16.118612] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:43.010 [2024-12-07 17:27:16.118649] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:43.010 [2024-12-07 17:27:16.119254] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:43.010 [2024-12-07 17:27:16.119329] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:43.010 [2024-12-07 17:27:16.119514] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:43.010 [2024-12-07 17:27:16.119574] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:43.010 pt2 00:10:43.010 17:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.010 17:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:43.010 17:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.010 17:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.010 [2024-12-07 17:27:16.130305] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:43.010 17:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.010 17:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:43.010 17:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:43.010 17:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.010 17:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:43.010 17:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.010 17:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:43.010 17:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.010 17:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.010 17:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.010 17:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.010 17:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:43.010 17:27:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.010 17:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.010 17:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.010 17:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.010 17:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.010 "name": "raid_bdev1", 00:10:43.010 "uuid": "e5f3798f-8708-4005-908b-4f023a9e8a5a", 00:10:43.010 "strip_size_kb": 64, 00:10:43.010 "state": "configuring", 00:10:43.010 "raid_level": "raid0", 00:10:43.010 "superblock": true, 00:10:43.010 "num_base_bdevs": 4, 00:10:43.010 "num_base_bdevs_discovered": 1, 00:10:43.010 "num_base_bdevs_operational": 4, 00:10:43.010 "base_bdevs_list": [ 00:10:43.010 { 00:10:43.010 "name": "pt1", 00:10:43.010 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:43.010 "is_configured": true, 00:10:43.010 "data_offset": 2048, 00:10:43.010 "data_size": 63488 00:10:43.010 }, 00:10:43.010 { 00:10:43.010 "name": null, 00:10:43.010 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:43.010 "is_configured": false, 00:10:43.010 "data_offset": 0, 00:10:43.010 "data_size": 63488 00:10:43.010 }, 00:10:43.010 { 00:10:43.010 "name": null, 00:10:43.010 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:43.010 "is_configured": false, 00:10:43.010 "data_offset": 2048, 00:10:43.010 "data_size": 63488 00:10:43.010 }, 00:10:43.010 { 00:10:43.010 "name": null, 00:10:43.010 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:43.010 "is_configured": false, 00:10:43.010 "data_offset": 2048, 00:10:43.010 "data_size": 63488 00:10:43.010 } 00:10:43.010 ] 00:10:43.010 }' 00:10:43.010 17:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.010 17:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:10:43.267 17:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:43.267 17:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:43.267 17:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:43.267 17:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.267 17:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.267 [2024-12-07 17:27:16.597547] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:43.267 [2024-12-07 17:27:16.597742] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:43.267 [2024-12-07 17:27:16.597784] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:43.267 [2024-12-07 17:27:16.597817] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:43.267 [2024-12-07 17:27:16.598392] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:43.267 [2024-12-07 17:27:16.598419] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:43.267 [2024-12-07 17:27:16.598524] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:43.267 [2024-12-07 17:27:16.598553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:43.267 pt2 00:10:43.267 17:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.267 17:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:43.267 17:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:43.267 17:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p 
pt3 -u 00000000-0000-0000-0000-000000000003 00:10:43.268 17:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.268 17:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.268 [2024-12-07 17:27:16.609482] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:43.268 [2024-12-07 17:27:16.609551] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:43.268 [2024-12-07 17:27:16.609578] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:43.268 [2024-12-07 17:27:16.609588] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:43.268 [2024-12-07 17:27:16.610116] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:43.268 [2024-12-07 17:27:16.610143] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:43.268 [2024-12-07 17:27:16.610238] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:43.268 [2024-12-07 17:27:16.610273] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:43.268 pt3 00:10:43.268 17:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.268 17:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:43.268 17:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:43.268 17:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:43.268 17:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.268 17:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.268 [2024-12-07 17:27:16.621407] vbdev_passthru.c: 608:vbdev_passthru_register: 
*NOTICE*: Match on malloc4 00:10:43.268 [2024-12-07 17:27:16.621458] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:43.268 [2024-12-07 17:27:16.621477] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:43.268 [2024-12-07 17:27:16.621486] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:43.268 [2024-12-07 17:27:16.621905] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:43.268 [2024-12-07 17:27:16.621920] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:43.268 [2024-12-07 17:27:16.622005] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:43.268 [2024-12-07 17:27:16.622029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:43.268 [2024-12-07 17:27:16.622173] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:43.268 [2024-12-07 17:27:16.622182] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:43.268 [2024-12-07 17:27:16.622428] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:43.268 [2024-12-07 17:27:16.622591] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:43.268 [2024-12-07 17:27:16.622604] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:43.268 [2024-12-07 17:27:16.622734] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:43.268 pt4 00:10:43.268 17:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.268 17:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:43.268 17:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:43.268 
17:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:43.268 17:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:43.268 17:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:43.268 17:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:43.268 17:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.268 17:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:43.268 17:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.268 17:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.268 17:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.268 17:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.268 17:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.268 17:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:43.268 17:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.268 17:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.525 17:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.525 17:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.525 "name": "raid_bdev1", 00:10:43.525 "uuid": "e5f3798f-8708-4005-908b-4f023a9e8a5a", 00:10:43.525 "strip_size_kb": 64, 00:10:43.525 "state": "online", 00:10:43.525 "raid_level": "raid0", 00:10:43.525 "superblock": true, 00:10:43.525 
"num_base_bdevs": 4, 00:10:43.525 "num_base_bdevs_discovered": 4, 00:10:43.525 "num_base_bdevs_operational": 4, 00:10:43.525 "base_bdevs_list": [ 00:10:43.525 { 00:10:43.525 "name": "pt1", 00:10:43.525 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:43.525 "is_configured": true, 00:10:43.525 "data_offset": 2048, 00:10:43.525 "data_size": 63488 00:10:43.525 }, 00:10:43.525 { 00:10:43.525 "name": "pt2", 00:10:43.525 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:43.525 "is_configured": true, 00:10:43.525 "data_offset": 2048, 00:10:43.525 "data_size": 63488 00:10:43.525 }, 00:10:43.525 { 00:10:43.525 "name": "pt3", 00:10:43.525 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:43.525 "is_configured": true, 00:10:43.525 "data_offset": 2048, 00:10:43.525 "data_size": 63488 00:10:43.525 }, 00:10:43.525 { 00:10:43.525 "name": "pt4", 00:10:43.525 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:43.525 "is_configured": true, 00:10:43.525 "data_offset": 2048, 00:10:43.525 "data_size": 63488 00:10:43.525 } 00:10:43.525 ] 00:10:43.525 }' 00:10:43.525 17:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.525 17:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.782 17:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:43.782 17:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:43.782 17:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:43.782 17:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:43.782 17:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:43.782 17:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:43.782 17:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
jq '.[]' 00:10:43.782 17:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:43.782 17:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.782 17:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.782 [2024-12-07 17:27:17.049221] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:43.782 17:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.782 17:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:43.782 "name": "raid_bdev1", 00:10:43.782 "aliases": [ 00:10:43.782 "e5f3798f-8708-4005-908b-4f023a9e8a5a" 00:10:43.782 ], 00:10:43.782 "product_name": "Raid Volume", 00:10:43.782 "block_size": 512, 00:10:43.782 "num_blocks": 253952, 00:10:43.782 "uuid": "e5f3798f-8708-4005-908b-4f023a9e8a5a", 00:10:43.782 "assigned_rate_limits": { 00:10:43.782 "rw_ios_per_sec": 0, 00:10:43.782 "rw_mbytes_per_sec": 0, 00:10:43.782 "r_mbytes_per_sec": 0, 00:10:43.782 "w_mbytes_per_sec": 0 00:10:43.782 }, 00:10:43.782 "claimed": false, 00:10:43.782 "zoned": false, 00:10:43.782 "supported_io_types": { 00:10:43.782 "read": true, 00:10:43.782 "write": true, 00:10:43.782 "unmap": true, 00:10:43.782 "flush": true, 00:10:43.782 "reset": true, 00:10:43.782 "nvme_admin": false, 00:10:43.782 "nvme_io": false, 00:10:43.782 "nvme_io_md": false, 00:10:43.782 "write_zeroes": true, 00:10:43.782 "zcopy": false, 00:10:43.782 "get_zone_info": false, 00:10:43.782 "zone_management": false, 00:10:43.782 "zone_append": false, 00:10:43.782 "compare": false, 00:10:43.782 "compare_and_write": false, 00:10:43.782 "abort": false, 00:10:43.782 "seek_hole": false, 00:10:43.782 "seek_data": false, 00:10:43.782 "copy": false, 00:10:43.782 "nvme_iov_md": false 00:10:43.782 }, 00:10:43.782 "memory_domains": [ 00:10:43.782 { 00:10:43.782 "dma_device_id": "system", 
00:10:43.782 "dma_device_type": 1 00:10:43.782 }, 00:10:43.782 { 00:10:43.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.782 "dma_device_type": 2 00:10:43.782 }, 00:10:43.782 { 00:10:43.782 "dma_device_id": "system", 00:10:43.782 "dma_device_type": 1 00:10:43.782 }, 00:10:43.782 { 00:10:43.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.782 "dma_device_type": 2 00:10:43.782 }, 00:10:43.782 { 00:10:43.782 "dma_device_id": "system", 00:10:43.782 "dma_device_type": 1 00:10:43.782 }, 00:10:43.782 { 00:10:43.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.782 "dma_device_type": 2 00:10:43.782 }, 00:10:43.783 { 00:10:43.783 "dma_device_id": "system", 00:10:43.783 "dma_device_type": 1 00:10:43.783 }, 00:10:43.783 { 00:10:43.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.783 "dma_device_type": 2 00:10:43.783 } 00:10:43.783 ], 00:10:43.783 "driver_specific": { 00:10:43.783 "raid": { 00:10:43.783 "uuid": "e5f3798f-8708-4005-908b-4f023a9e8a5a", 00:10:43.783 "strip_size_kb": 64, 00:10:43.783 "state": "online", 00:10:43.783 "raid_level": "raid0", 00:10:43.783 "superblock": true, 00:10:43.783 "num_base_bdevs": 4, 00:10:43.783 "num_base_bdevs_discovered": 4, 00:10:43.783 "num_base_bdevs_operational": 4, 00:10:43.783 "base_bdevs_list": [ 00:10:43.783 { 00:10:43.783 "name": "pt1", 00:10:43.783 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:43.783 "is_configured": true, 00:10:43.783 "data_offset": 2048, 00:10:43.783 "data_size": 63488 00:10:43.783 }, 00:10:43.783 { 00:10:43.783 "name": "pt2", 00:10:43.783 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:43.783 "is_configured": true, 00:10:43.783 "data_offset": 2048, 00:10:43.783 "data_size": 63488 00:10:43.783 }, 00:10:43.783 { 00:10:43.783 "name": "pt3", 00:10:43.783 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:43.783 "is_configured": true, 00:10:43.783 "data_offset": 2048, 00:10:43.783 "data_size": 63488 00:10:43.783 }, 00:10:43.783 { 00:10:43.783 "name": "pt4", 00:10:43.783 
"uuid": "00000000-0000-0000-0000-000000000004", 00:10:43.783 "is_configured": true, 00:10:43.783 "data_offset": 2048, 00:10:43.783 "data_size": 63488 00:10:43.783 } 00:10:43.783 ] 00:10:43.783 } 00:10:43.783 } 00:10:43.783 }' 00:10:43.783 17:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:43.783 17:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:43.783 pt2 00:10:43.783 pt3 00:10:43.783 pt4' 00:10:43.783 17:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.040 17:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:44.040 17:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.040 17:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:44.040 17:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.040 17:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.040 17:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.040 17:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.040 17:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:44.040 17:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:44.040 17:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.040 17:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:44.040 17:27:17 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.040 17:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.040 17:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.040 17:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.040 17:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:44.040 17:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:44.040 17:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.040 17:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.040 17:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:44.040 17:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.040 17:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.040 17:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.040 17:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:44.040 17:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:44.040 17:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.040 17:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:44.040 17:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.040 17:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:10:44.040 17:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.040 17:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.040 17:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:44.040 17:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:44.040 17:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:44.040 17:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:44.040 17:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.040 17:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.040 [2024-12-07 17:27:17.372609] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:44.040 17:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.040 17:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e5f3798f-8708-4005-908b-4f023a9e8a5a '!=' e5f3798f-8708-4005-908b-4f023a9e8a5a ']' 00:10:44.040 17:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:10:44.040 17:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:44.040 17:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:44.040 17:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70739 00:10:44.040 17:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 70739 ']' 00:10:44.040 17:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 70739 00:10:44.040 17:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:44.298 17:27:17 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:44.298 17:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70739 00:10:44.298 killing process with pid 70739 00:10:44.298 17:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:44.298 17:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:44.298 17:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70739' 00:10:44.298 17:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 70739 00:10:44.298 [2024-12-07 17:27:17.458216] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:44.298 [2024-12-07 17:27:17.458337] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:44.298 17:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 70739 00:10:44.298 [2024-12-07 17:27:17.458422] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:44.298 [2024-12-07 17:27:17.458433] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:44.559 [2024-12-07 17:27:17.897567] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:45.934 17:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:45.934 00:10:45.934 real 0m5.673s 00:10:45.934 user 0m7.959s 00:10:45.934 sys 0m1.012s 00:10:45.934 17:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:45.934 17:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.934 ************************************ 00:10:45.934 END TEST raid_superblock_test 00:10:45.934 ************************************ 00:10:45.934 
17:27:19 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:10:45.934 17:27:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:45.934 17:27:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:45.934 17:27:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:45.934 ************************************ 00:10:45.934 START TEST raid_read_error_test 00:10:45.934 ************************************ 00:10:45.934 17:27:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:10:45.934 17:27:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:45.934 17:27:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:45.934 17:27:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:45.934 17:27:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:45.934 17:27:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:45.934 17:27:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:45.934 17:27:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:45.934 17:27:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:45.934 17:27:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:45.934 17:27:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:45.934 17:27:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:45.934 17:27:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:45.934 17:27:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:45.934 17:27:19 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:45.934 17:27:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:45.934 17:27:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:45.934 17:27:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:45.934 17:27:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:45.934 17:27:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:45.934 17:27:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:45.934 17:27:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:45.934 17:27:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:45.934 17:27:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:45.934 17:27:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:45.934 17:27:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:45.934 17:27:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:45.934 17:27:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:45.934 17:27:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:45.934 17:27:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.IkcsQAwVi5 00:10:45.934 17:27:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71003 00:10:45.934 17:27:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:45.934 17:27:19 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71003 00:10:45.934 17:27:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 71003 ']' 00:10:45.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:45.934 17:27:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:45.934 17:27:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:45.934 17:27:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:45.934 17:27:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:45.934 17:27:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.934 [2024-12-07 17:27:19.308406] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:10:45.934 [2024-12-07 17:27:19.308521] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71003 ] 00:10:46.193 [2024-12-07 17:27:19.480247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:46.451 [2024-12-07 17:27:19.615648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.709 [2024-12-07 17:27:19.855026] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:46.709 [2024-12-07 17:27:19.855175] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:46.968 17:27:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:46.968 17:27:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:46.968 17:27:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:46.968 17:27:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:46.968 17:27:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.968 17:27:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.968 BaseBdev1_malloc 00:10:46.968 17:27:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.968 17:27:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:46.968 17:27:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.968 17:27:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.968 true 00:10:46.968 17:27:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:46.968 17:27:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:46.968 17:27:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.968 17:27:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.968 [2024-12-07 17:27:20.193837] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:46.968 [2024-12-07 17:27:20.194008] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:46.968 [2024-12-07 17:27:20.194034] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:46.968 [2024-12-07 17:27:20.194046] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:46.968 [2024-12-07 17:27:20.196360] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:46.968 [2024-12-07 17:27:20.196402] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:46.968 BaseBdev1 00:10:46.968 17:27:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.968 17:27:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:46.968 17:27:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:46.968 17:27:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.968 17:27:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.968 BaseBdev2_malloc 00:10:46.968 17:27:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.968 17:27:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:46.968 17:27:20 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.968 17:27:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.968 true 00:10:46.968 17:27:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.968 17:27:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:46.968 17:27:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.968 17:27:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.968 [2024-12-07 17:27:20.269255] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:46.968 [2024-12-07 17:27:20.269332] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:46.968 [2024-12-07 17:27:20.269352] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:46.968 [2024-12-07 17:27:20.269364] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:46.968 [2024-12-07 17:27:20.271884] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:46.968 [2024-12-07 17:27:20.271923] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:46.968 BaseBdev2 00:10:46.968 17:27:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.968 17:27:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:46.968 17:27:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:46.968 17:27:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.968 17:27:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.968 BaseBdev3_malloc 00:10:46.968 17:27:20 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.968 17:27:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:46.968 17:27:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.968 17:27:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.227 true 00:10:47.227 17:27:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.227 17:27:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:47.227 17:27:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.227 17:27:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.227 [2024-12-07 17:27:20.357070] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:47.227 [2024-12-07 17:27:20.357131] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:47.227 [2024-12-07 17:27:20.357150] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:47.227 [2024-12-07 17:27:20.357162] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:47.227 [2024-12-07 17:27:20.359675] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:47.227 [2024-12-07 17:27:20.359712] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:47.227 BaseBdev3 00:10:47.227 17:27:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.227 17:27:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:47.227 17:27:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:47.227 17:27:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.227 17:27:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.227 BaseBdev4_malloc 00:10:47.227 17:27:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.227 17:27:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:47.227 17:27:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.227 17:27:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.227 true 00:10:47.227 17:27:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.227 17:27:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:47.227 17:27:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.227 17:27:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.227 [2024-12-07 17:27:20.432810] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:47.227 [2024-12-07 17:27:20.432870] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:47.227 [2024-12-07 17:27:20.432889] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:47.227 [2024-12-07 17:27:20.432900] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:47.228 [2024-12-07 17:27:20.435362] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:47.228 [2024-12-07 17:27:20.435400] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:47.228 BaseBdev4 00:10:47.228 17:27:20 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.228 17:27:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:47.228 17:27:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.228 17:27:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.228 [2024-12-07 17:27:20.444875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:47.228 [2024-12-07 17:27:20.447016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:47.228 [2024-12-07 17:27:20.447099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:47.228 [2024-12-07 17:27:20.447164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:47.228 [2024-12-07 17:27:20.447379] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:47.228 [2024-12-07 17:27:20.447403] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:47.228 [2024-12-07 17:27:20.447648] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:10:47.228 [2024-12-07 17:27:20.447816] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:47.228 [2024-12-07 17:27:20.447833] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:47.228 [2024-12-07 17:27:20.448006] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:47.228 17:27:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.228 17:27:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:47.228 17:27:20 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:47.228 17:27:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:47.228 17:27:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:47.228 17:27:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:47.228 17:27:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:47.228 17:27:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.228 17:27:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.228 17:27:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.228 17:27:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.228 17:27:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.228 17:27:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:47.228 17:27:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.228 17:27:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.228 17:27:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.228 17:27:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.228 "name": "raid_bdev1", 00:10:47.228 "uuid": "04a1fcc5-a4f5-4032-b371-6b281a4400bd", 00:10:47.228 "strip_size_kb": 64, 00:10:47.228 "state": "online", 00:10:47.228 "raid_level": "raid0", 00:10:47.228 "superblock": true, 00:10:47.228 "num_base_bdevs": 4, 00:10:47.228 "num_base_bdevs_discovered": 4, 00:10:47.228 "num_base_bdevs_operational": 4, 00:10:47.228 "base_bdevs_list": [ 00:10:47.228 
{ 00:10:47.228 "name": "BaseBdev1", 00:10:47.228 "uuid": "43aafe66-9fce-5b04-836b-89672d0c50f4", 00:10:47.228 "is_configured": true, 00:10:47.228 "data_offset": 2048, 00:10:47.228 "data_size": 63488 00:10:47.228 }, 00:10:47.228 { 00:10:47.228 "name": "BaseBdev2", 00:10:47.228 "uuid": "97844970-6465-5bfd-8360-081249a1aa4c", 00:10:47.228 "is_configured": true, 00:10:47.228 "data_offset": 2048, 00:10:47.228 "data_size": 63488 00:10:47.228 }, 00:10:47.228 { 00:10:47.228 "name": "BaseBdev3", 00:10:47.228 "uuid": "07d68178-b5b8-5cda-a708-c3b22997e3dc", 00:10:47.228 "is_configured": true, 00:10:47.228 "data_offset": 2048, 00:10:47.228 "data_size": 63488 00:10:47.228 }, 00:10:47.228 { 00:10:47.228 "name": "BaseBdev4", 00:10:47.228 "uuid": "94dd00c9-7fbd-5fd8-9ef0-2fcbe6c44a0b", 00:10:47.228 "is_configured": true, 00:10:47.228 "data_offset": 2048, 00:10:47.228 "data_size": 63488 00:10:47.228 } 00:10:47.228 ] 00:10:47.228 }' 00:10:47.228 17:27:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.228 17:27:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.793 17:27:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:47.793 17:27:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:47.793 [2024-12-07 17:27:20.985382] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:10:48.728 17:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:48.728 17:27:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.728 17:27:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.728 17:27:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.728 17:27:21 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:48.728 17:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:48.728 17:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:48.728 17:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:48.728 17:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:48.728 17:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:48.728 17:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:48.728 17:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:48.728 17:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:48.728 17:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.728 17:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.728 17:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.728 17:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.728 17:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.728 17:27:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.728 17:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:48.728 17:27:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.728 17:27:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.728 17:27:21 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.728 "name": "raid_bdev1", 00:10:48.728 "uuid": "04a1fcc5-a4f5-4032-b371-6b281a4400bd", 00:10:48.728 "strip_size_kb": 64, 00:10:48.728 "state": "online", 00:10:48.728 "raid_level": "raid0", 00:10:48.728 "superblock": true, 00:10:48.728 "num_base_bdevs": 4, 00:10:48.728 "num_base_bdevs_discovered": 4, 00:10:48.728 "num_base_bdevs_operational": 4, 00:10:48.728 "base_bdevs_list": [ 00:10:48.728 { 00:10:48.728 "name": "BaseBdev1", 00:10:48.728 "uuid": "43aafe66-9fce-5b04-836b-89672d0c50f4", 00:10:48.728 "is_configured": true, 00:10:48.728 "data_offset": 2048, 00:10:48.728 "data_size": 63488 00:10:48.728 }, 00:10:48.728 { 00:10:48.728 "name": "BaseBdev2", 00:10:48.728 "uuid": "97844970-6465-5bfd-8360-081249a1aa4c", 00:10:48.728 "is_configured": true, 00:10:48.728 "data_offset": 2048, 00:10:48.728 "data_size": 63488 00:10:48.728 }, 00:10:48.728 { 00:10:48.728 "name": "BaseBdev3", 00:10:48.728 "uuid": "07d68178-b5b8-5cda-a708-c3b22997e3dc", 00:10:48.728 "is_configured": true, 00:10:48.728 "data_offset": 2048, 00:10:48.728 "data_size": 63488 00:10:48.728 }, 00:10:48.728 { 00:10:48.728 "name": "BaseBdev4", 00:10:48.728 "uuid": "94dd00c9-7fbd-5fd8-9ef0-2fcbe6c44a0b", 00:10:48.728 "is_configured": true, 00:10:48.728 "data_offset": 2048, 00:10:48.728 "data_size": 63488 00:10:48.728 } 00:10:48.728 ] 00:10:48.728 }' 00:10:48.728 17:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.728 17:27:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.987 17:27:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:48.987 17:27:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.987 17:27:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.246 [2024-12-07 17:27:22.371174] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:49.246 [2024-12-07 17:27:22.371227] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:49.246 [2024-12-07 17:27:22.373951] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:49.246 [2024-12-07 17:27:22.374021] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:49.246 [2024-12-07 17:27:22.374071] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:49.246 [2024-12-07 17:27:22.374085] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:49.246 { 00:10:49.246 "results": [ 00:10:49.246 { 00:10:49.246 "job": "raid_bdev1", 00:10:49.246 "core_mask": "0x1", 00:10:49.246 "workload": "randrw", 00:10:49.246 "percentage": 50, 00:10:49.246 "status": "finished", 00:10:49.246 "queue_depth": 1, 00:10:49.246 "io_size": 131072, 00:10:49.246 "runtime": 1.386444, 00:10:49.246 "iops": 13160.286315206384, 00:10:49.246 "mibps": 1645.035789400798, 00:10:49.246 "io_failed": 1, 00:10:49.246 "io_timeout": 0, 00:10:49.246 "avg_latency_us": 107.11497593790018, 00:10:49.246 "min_latency_us": 26.047161572052403, 00:10:49.246 "max_latency_us": 1409.4532751091704 00:10:49.246 } 00:10:49.246 ], 00:10:49.246 "core_count": 1 00:10:49.246 } 00:10:49.246 17:27:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.246 17:27:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71003 00:10:49.246 17:27:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 71003 ']' 00:10:49.246 17:27:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 71003 00:10:49.246 17:27:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:49.246 17:27:22 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:49.246 17:27:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71003 00:10:49.246 17:27:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:49.246 17:27:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:49.246 killing process with pid 71003 00:10:49.246 17:27:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71003' 00:10:49.246 17:27:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 71003 00:10:49.246 [2024-12-07 17:27:22.421057] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:49.246 17:27:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 71003 00:10:49.505 [2024-12-07 17:27:22.784202] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:50.882 17:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.IkcsQAwVi5 00:10:50.882 17:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:50.882 17:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:50.882 17:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:10:50.882 17:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:50.882 17:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:50.882 17:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:50.882 17:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:10:50.882 00:10:50.882 real 0m4.907s 00:10:50.882 user 0m5.642s 00:10:50.882 sys 0m0.669s 00:10:50.882 17:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:10:50.882 17:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.882 ************************************ 00:10:50.882 END TEST raid_read_error_test 00:10:50.882 ************************************ 00:10:50.882 17:27:24 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:10:50.882 17:27:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:50.882 17:27:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:50.882 17:27:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:50.882 ************************************ 00:10:50.882 START TEST raid_write_error_test 00:10:50.882 ************************************ 00:10:50.882 17:27:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:10:50.882 17:27:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:50.882 17:27:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:50.882 17:27:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:50.882 17:27:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:50.882 17:27:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:50.882 17:27:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:50.882 17:27:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:50.882 17:27:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:50.882 17:27:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:50.882 17:27:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:50.882 17:27:24 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:50.882 17:27:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:50.882 17:27:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:50.882 17:27:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:50.882 17:27:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:50.882 17:27:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:50.882 17:27:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:50.882 17:27:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:50.882 17:27:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:50.882 17:27:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:50.882 17:27:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:50.882 17:27:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:50.882 17:27:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:50.882 17:27:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:50.882 17:27:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:50.882 17:27:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:50.882 17:27:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:50.882 17:27:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:50.882 17:27:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.aJ4ybcvSFv 00:10:50.882 17:27:24 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71151 00:10:50.882 17:27:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71151 00:10:50.882 17:27:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:50.882 17:27:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 71151 ']' 00:10:50.882 17:27:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.882 17:27:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:50.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:50.882 17:27:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.882 17:27:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:50.882 17:27:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.141 [2024-12-07 17:27:24.295571] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:10:51.141 [2024-12-07 17:27:24.295693] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71151 ] 00:10:51.141 [2024-12-07 17:27:24.473578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.399 [2024-12-07 17:27:24.615240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.658 [2024-12-07 17:27:24.853583] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:51.658 [2024-12-07 17:27:24.853670] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:51.916 17:27:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:51.916 17:27:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:51.916 17:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:51.916 17:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:51.916 17:27:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.916 17:27:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.916 BaseBdev1_malloc 00:10:51.916 17:27:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.916 17:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:51.916 17:27:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.916 17:27:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.916 true 00:10:51.916 17:27:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:51.916 17:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:51.916 17:27:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.916 17:27:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.916 [2024-12-07 17:27:25.182950] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:51.916 [2024-12-07 17:27:25.183017] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:51.916 [2024-12-07 17:27:25.183039] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:51.916 [2024-12-07 17:27:25.183051] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:51.916 [2024-12-07 17:27:25.185487] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:51.916 [2024-12-07 17:27:25.185523] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:51.916 BaseBdev1 00:10:51.916 17:27:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.916 17:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:51.916 17:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:51.916 17:27:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.916 17:27:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.916 BaseBdev2_malloc 00:10:51.916 17:27:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.916 17:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:51.916 17:27:25 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.916 17:27:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.916 true 00:10:51.916 17:27:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.916 17:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:51.916 17:27:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.916 17:27:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.916 [2024-12-07 17:27:25.258070] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:51.916 [2024-12-07 17:27:25.258134] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:51.916 [2024-12-07 17:27:25.258153] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:51.916 [2024-12-07 17:27:25.258177] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:51.916 [2024-12-07 17:27:25.260630] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:51.916 [2024-12-07 17:27:25.260665] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:51.916 BaseBdev2 00:10:51.916 17:27:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.916 17:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:51.916 17:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:51.916 17:27:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.916 17:27:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:52.174 BaseBdev3_malloc 00:10:52.174 17:27:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.174 17:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:52.174 17:27:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.174 17:27:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.174 true 00:10:52.174 17:27:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.174 17:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:52.174 17:27:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.174 17:27:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.174 [2024-12-07 17:27:25.343651] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:52.174 [2024-12-07 17:27:25.343712] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:52.174 [2024-12-07 17:27:25.343732] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:52.174 [2024-12-07 17:27:25.343745] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:52.174 [2024-12-07 17:27:25.346154] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:52.174 [2024-12-07 17:27:25.346188] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:52.174 BaseBdev3 00:10:52.174 17:27:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.174 17:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:52.174 17:27:25 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:52.174 17:27:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.174 17:27:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.174 BaseBdev4_malloc 00:10:52.174 17:27:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.174 17:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:52.174 17:27:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.174 17:27:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.174 true 00:10:52.174 17:27:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.174 17:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:52.174 17:27:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.174 17:27:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.174 [2024-12-07 17:27:25.418664] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:52.174 [2024-12-07 17:27:25.418724] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:52.174 [2024-12-07 17:27:25.418743] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:52.174 [2024-12-07 17:27:25.418756] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:52.174 [2024-12-07 17:27:25.421122] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:52.174 [2024-12-07 17:27:25.421157] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:52.174 BaseBdev4 
00:10:52.174 17:27:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.174 17:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:52.174 17:27:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.174 17:27:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.174 [2024-12-07 17:27:25.430718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:52.174 [2024-12-07 17:27:25.432795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:52.174 [2024-12-07 17:27:25.432877] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:52.174 [2024-12-07 17:27:25.432950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:52.174 [2024-12-07 17:27:25.433194] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:52.174 [2024-12-07 17:27:25.433220] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:52.174 [2024-12-07 17:27:25.433464] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:10:52.174 [2024-12-07 17:27:25.433645] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:52.174 [2024-12-07 17:27:25.433663] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:52.174 [2024-12-07 17:27:25.433827] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:52.174 17:27:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.174 17:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:10:52.175 17:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:52.175 17:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:52.175 17:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:52.175 17:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.175 17:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:52.175 17:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.175 17:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.175 17:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.175 17:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.175 17:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.175 17:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:52.175 17:27:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.175 17:27:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.175 17:27:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.175 17:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.175 "name": "raid_bdev1", 00:10:52.175 "uuid": "050498ed-265e-49aa-8f2a-8282998ae36f", 00:10:52.175 "strip_size_kb": 64, 00:10:52.175 "state": "online", 00:10:52.175 "raid_level": "raid0", 00:10:52.175 "superblock": true, 00:10:52.175 "num_base_bdevs": 4, 00:10:52.175 "num_base_bdevs_discovered": 4, 00:10:52.175 
"num_base_bdevs_operational": 4, 00:10:52.175 "base_bdevs_list": [ 00:10:52.175 { 00:10:52.175 "name": "BaseBdev1", 00:10:52.175 "uuid": "08950459-e55c-5b4a-b14f-11d3f2cea4d3", 00:10:52.175 "is_configured": true, 00:10:52.175 "data_offset": 2048, 00:10:52.175 "data_size": 63488 00:10:52.175 }, 00:10:52.175 { 00:10:52.175 "name": "BaseBdev2", 00:10:52.175 "uuid": "6a7a022d-b295-5fd1-a90f-71db3b0ca958", 00:10:52.175 "is_configured": true, 00:10:52.175 "data_offset": 2048, 00:10:52.175 "data_size": 63488 00:10:52.175 }, 00:10:52.175 { 00:10:52.175 "name": "BaseBdev3", 00:10:52.175 "uuid": "2f311ea6-adaf-5b4a-a9d6-10516e8f86f2", 00:10:52.175 "is_configured": true, 00:10:52.175 "data_offset": 2048, 00:10:52.175 "data_size": 63488 00:10:52.175 }, 00:10:52.175 { 00:10:52.175 "name": "BaseBdev4", 00:10:52.175 "uuid": "a9f4eaff-2b8f-5caf-a418-36194b1c8cd5", 00:10:52.175 "is_configured": true, 00:10:52.175 "data_offset": 2048, 00:10:52.175 "data_size": 63488 00:10:52.175 } 00:10:52.175 ] 00:10:52.175 }' 00:10:52.175 17:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.175 17:27:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.740 17:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:52.740 17:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:52.740 [2024-12-07 17:27:25.935198] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:10:53.709 17:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:53.709 17:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.709 17:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.709 17:27:26 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.709 17:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:53.709 17:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:53.709 17:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:53.709 17:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:53.709 17:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:53.709 17:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:53.709 17:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:53.709 17:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.709 17:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:53.709 17:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.709 17:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.709 17:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.709 17:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.709 17:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.709 17:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:53.709 17:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.709 17:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.710 17:27:26 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.710 17:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.710 "name": "raid_bdev1", 00:10:53.710 "uuid": "050498ed-265e-49aa-8f2a-8282998ae36f", 00:10:53.710 "strip_size_kb": 64, 00:10:53.710 "state": "online", 00:10:53.710 "raid_level": "raid0", 00:10:53.710 "superblock": true, 00:10:53.710 "num_base_bdevs": 4, 00:10:53.710 "num_base_bdevs_discovered": 4, 00:10:53.710 "num_base_bdevs_operational": 4, 00:10:53.710 "base_bdevs_list": [ 00:10:53.710 { 00:10:53.710 "name": "BaseBdev1", 00:10:53.710 "uuid": "08950459-e55c-5b4a-b14f-11d3f2cea4d3", 00:10:53.710 "is_configured": true, 00:10:53.710 "data_offset": 2048, 00:10:53.710 "data_size": 63488 00:10:53.710 }, 00:10:53.710 { 00:10:53.710 "name": "BaseBdev2", 00:10:53.710 "uuid": "6a7a022d-b295-5fd1-a90f-71db3b0ca958", 00:10:53.710 "is_configured": true, 00:10:53.710 "data_offset": 2048, 00:10:53.710 "data_size": 63488 00:10:53.710 }, 00:10:53.710 { 00:10:53.710 "name": "BaseBdev3", 00:10:53.710 "uuid": "2f311ea6-adaf-5b4a-a9d6-10516e8f86f2", 00:10:53.710 "is_configured": true, 00:10:53.710 "data_offset": 2048, 00:10:53.710 "data_size": 63488 00:10:53.710 }, 00:10:53.710 { 00:10:53.710 "name": "BaseBdev4", 00:10:53.710 "uuid": "a9f4eaff-2b8f-5caf-a418-36194b1c8cd5", 00:10:53.710 "is_configured": true, 00:10:53.710 "data_offset": 2048, 00:10:53.710 "data_size": 63488 00:10:53.710 } 00:10:53.710 ] 00:10:53.710 }' 00:10:53.710 17:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.710 17:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.969 17:27:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:53.969 17:27:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.969 17:27:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:10:53.969 [2024-12-07 17:27:27.292137] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:53.969 [2024-12-07 17:27:27.292193] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:53.969 [2024-12-07 17:27:27.294932] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:53.969 [2024-12-07 17:27:27.295017] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:53.969 [2024-12-07 17:27:27.295069] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:53.969 [2024-12-07 17:27:27.295082] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:53.969 { 00:10:53.969 "results": [ 00:10:53.969 { 00:10:53.969 "job": "raid_bdev1", 00:10:53.969 "core_mask": "0x1", 00:10:53.969 "workload": "randrw", 00:10:53.969 "percentage": 50, 00:10:53.969 "status": "finished", 00:10:53.969 "queue_depth": 1, 00:10:53.969 "io_size": 131072, 00:10:53.969 "runtime": 1.357562, 00:10:53.969 "iops": 13401.2295571031, 00:10:53.969 "mibps": 1675.1536946378876, 00:10:53.969 "io_failed": 1, 00:10:53.969 "io_timeout": 0, 00:10:53.969 "avg_latency_us": 104.94286892410906, 00:10:53.969 "min_latency_us": 26.494323144104804, 00:10:53.969 "max_latency_us": 1409.4532751091704 00:10:53.969 } 00:10:53.969 ], 00:10:53.969 "core_count": 1 00:10:53.969 } 00:10:53.969 17:27:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.969 17:27:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71151 00:10:53.969 17:27:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 71151 ']' 00:10:53.969 17:27:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 71151 00:10:53.969 17:27:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 
00:10:53.969 17:27:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:53.969 17:27:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71151 00:10:53.969 17:27:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:53.969 17:27:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:53.969 killing process with pid 71151 00:10:53.969 17:27:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71151' 00:10:53.969 17:27:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 71151 00:10:53.969 [2024-12-07 17:27:27.340684] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:53.969 17:27:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 71151 00:10:54.536 [2024-12-07 17:27:27.704725] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:55.909 17:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.aJ4ybcvSFv 00:10:55.909 17:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:55.909 17:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:55.909 17:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:10:55.909 17:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:55.909 17:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:55.909 17:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:55.909 17:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:10:55.909 00:10:55.909 real 0m4.832s 00:10:55.909 user 0m5.513s 00:10:55.909 sys 0m0.677s 00:10:55.909 17:27:29 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:55.909 17:27:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.909 ************************************ 00:10:55.909 END TEST raid_write_error_test 00:10:55.909 ************************************ 00:10:55.909 17:27:29 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:55.909 17:27:29 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:10:55.909 17:27:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:55.909 17:27:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:55.909 17:27:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:55.909 ************************************ 00:10:55.909 START TEST raid_state_function_test 00:10:55.909 ************************************ 00:10:55.909 17:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:10:55.909 17:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:55.909 17:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:55.909 17:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:55.909 17:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:55.909 17:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:55.909 17:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:55.909 17:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:55.909 17:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:55.909 17:27:29 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:55.909 17:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:55.909 17:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:55.909 17:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:55.909 17:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:55.909 17:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:55.909 17:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:55.909 17:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:55.909 17:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:55.909 17:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:55.909 17:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:55.909 17:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:55.909 17:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:55.909 17:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:55.909 17:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:55.909 17:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:55.909 17:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:55.909 17:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:55.909 17:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:10:55.909 17:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:55.909 17:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:55.909 17:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71300 00:10:55.909 17:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:55.909 17:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71300' 00:10:55.909 Process raid pid: 71300 00:10:55.909 17:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71300 00:10:55.909 17:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71300 ']' 00:10:55.909 17:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.909 17:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:55.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.909 17:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.909 17:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:55.909 17:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.909 [2024-12-07 17:27:29.188832] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:10:55.909 [2024-12-07 17:27:29.188954] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:56.168 [2024-12-07 17:27:29.364406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:56.168 [2024-12-07 17:27:29.503052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.426 [2024-12-07 17:27:29.741322] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:56.426 [2024-12-07 17:27:29.741382] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:56.685 17:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:56.685 17:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:56.685 17:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:56.685 17:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.685 17:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.685 [2024-12-07 17:27:30.016132] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:56.685 [2024-12-07 17:27:30.016205] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:56.685 [2024-12-07 17:27:30.016216] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:56.685 [2024-12-07 17:27:30.016227] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:56.685 [2024-12-07 17:27:30.016233] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:56.685 [2024-12-07 17:27:30.016242] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:56.685 [2024-12-07 17:27:30.016248] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:56.685 [2024-12-07 17:27:30.016257] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:56.685 17:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.685 17:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:56.685 17:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.685 17:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:56.685 17:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:56.685 17:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:56.685 17:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:56.685 17:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.685 17:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.685 17:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.685 17:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.685 17:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.685 17:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.685 17:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] 
| select(.name == "Existed_Raid")' 00:10:56.685 17:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.685 17:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.944 17:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.944 "name": "Existed_Raid", 00:10:56.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.944 "strip_size_kb": 64, 00:10:56.944 "state": "configuring", 00:10:56.944 "raid_level": "concat", 00:10:56.944 "superblock": false, 00:10:56.944 "num_base_bdevs": 4, 00:10:56.944 "num_base_bdevs_discovered": 0, 00:10:56.944 "num_base_bdevs_operational": 4, 00:10:56.944 "base_bdevs_list": [ 00:10:56.944 { 00:10:56.944 "name": "BaseBdev1", 00:10:56.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.944 "is_configured": false, 00:10:56.944 "data_offset": 0, 00:10:56.944 "data_size": 0 00:10:56.944 }, 00:10:56.944 { 00:10:56.944 "name": "BaseBdev2", 00:10:56.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.944 "is_configured": false, 00:10:56.944 "data_offset": 0, 00:10:56.944 "data_size": 0 00:10:56.944 }, 00:10:56.944 { 00:10:56.944 "name": "BaseBdev3", 00:10:56.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.944 "is_configured": false, 00:10:56.944 "data_offset": 0, 00:10:56.944 "data_size": 0 00:10:56.944 }, 00:10:56.944 { 00:10:56.944 "name": "BaseBdev4", 00:10:56.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.944 "is_configured": false, 00:10:56.944 "data_offset": 0, 00:10:56.944 "data_size": 0 00:10:56.944 } 00:10:56.944 ] 00:10:56.944 }' 00:10:56.944 17:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.944 17:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.202 17:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:10:57.202 17:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.202 17:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.202 [2024-12-07 17:27:30.423427] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:57.202 [2024-12-07 17:27:30.423485] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:57.202 17:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.202 17:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:57.202 17:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.202 17:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.202 [2024-12-07 17:27:30.435359] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:57.202 [2024-12-07 17:27:30.435400] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:57.202 [2024-12-07 17:27:30.435408] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:57.202 [2024-12-07 17:27:30.435417] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:57.202 [2024-12-07 17:27:30.435423] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:57.202 [2024-12-07 17:27:30.435433] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:57.202 [2024-12-07 17:27:30.435438] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:57.202 [2024-12-07 17:27:30.435446] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:57.202 17:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.202 17:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:57.202 17:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.202 17:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.202 [2024-12-07 17:27:30.490976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:57.202 BaseBdev1 00:10:57.202 17:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.202 17:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:57.202 17:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:57.202 17:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:57.202 17:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:57.202 17:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:57.202 17:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:57.202 17:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:57.202 17:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.202 17:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.202 17:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.202 17:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:57.202 17:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.202 17:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.202 [ 00:10:57.202 { 00:10:57.202 "name": "BaseBdev1", 00:10:57.202 "aliases": [ 00:10:57.202 "35d688c0-7503-4377-9f73-05e6cd839723" 00:10:57.202 ], 00:10:57.202 "product_name": "Malloc disk", 00:10:57.202 "block_size": 512, 00:10:57.202 "num_blocks": 65536, 00:10:57.202 "uuid": "35d688c0-7503-4377-9f73-05e6cd839723", 00:10:57.202 "assigned_rate_limits": { 00:10:57.202 "rw_ios_per_sec": 0, 00:10:57.202 "rw_mbytes_per_sec": 0, 00:10:57.202 "r_mbytes_per_sec": 0, 00:10:57.202 "w_mbytes_per_sec": 0 00:10:57.202 }, 00:10:57.202 "claimed": true, 00:10:57.202 "claim_type": "exclusive_write", 00:10:57.202 "zoned": false, 00:10:57.202 "supported_io_types": { 00:10:57.202 "read": true, 00:10:57.202 "write": true, 00:10:57.202 "unmap": true, 00:10:57.202 "flush": true, 00:10:57.202 "reset": true, 00:10:57.202 "nvme_admin": false, 00:10:57.202 "nvme_io": false, 00:10:57.202 "nvme_io_md": false, 00:10:57.202 "write_zeroes": true, 00:10:57.202 "zcopy": true, 00:10:57.202 "get_zone_info": false, 00:10:57.202 "zone_management": false, 00:10:57.202 "zone_append": false, 00:10:57.202 "compare": false, 00:10:57.202 "compare_and_write": false, 00:10:57.202 "abort": true, 00:10:57.202 "seek_hole": false, 00:10:57.202 "seek_data": false, 00:10:57.202 "copy": true, 00:10:57.202 "nvme_iov_md": false 00:10:57.202 }, 00:10:57.202 "memory_domains": [ 00:10:57.202 { 00:10:57.202 "dma_device_id": "system", 00:10:57.202 "dma_device_type": 1 00:10:57.202 }, 00:10:57.202 { 00:10:57.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.202 "dma_device_type": 2 00:10:57.202 } 00:10:57.202 ], 00:10:57.202 "driver_specific": {} 00:10:57.202 } 00:10:57.202 ] 00:10:57.202 17:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:57.202 17:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:57.202 17:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:57.202 17:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:57.202 17:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:57.202 17:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:57.202 17:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:57.203 17:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:57.203 17:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.203 17:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.203 17:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.203 17:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.203 17:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.203 17:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.203 17:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.203 17:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.203 17:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.460 17:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.460 "name": "Existed_Raid", 
00:10:57.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.460 "strip_size_kb": 64, 00:10:57.460 "state": "configuring", 00:10:57.460 "raid_level": "concat", 00:10:57.460 "superblock": false, 00:10:57.460 "num_base_bdevs": 4, 00:10:57.460 "num_base_bdevs_discovered": 1, 00:10:57.460 "num_base_bdevs_operational": 4, 00:10:57.460 "base_bdevs_list": [ 00:10:57.461 { 00:10:57.461 "name": "BaseBdev1", 00:10:57.461 "uuid": "35d688c0-7503-4377-9f73-05e6cd839723", 00:10:57.461 "is_configured": true, 00:10:57.461 "data_offset": 0, 00:10:57.461 "data_size": 65536 00:10:57.461 }, 00:10:57.461 { 00:10:57.461 "name": "BaseBdev2", 00:10:57.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.461 "is_configured": false, 00:10:57.461 "data_offset": 0, 00:10:57.461 "data_size": 0 00:10:57.461 }, 00:10:57.461 { 00:10:57.461 "name": "BaseBdev3", 00:10:57.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.461 "is_configured": false, 00:10:57.461 "data_offset": 0, 00:10:57.461 "data_size": 0 00:10:57.461 }, 00:10:57.461 { 00:10:57.461 "name": "BaseBdev4", 00:10:57.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.461 "is_configured": false, 00:10:57.461 "data_offset": 0, 00:10:57.461 "data_size": 0 00:10:57.461 } 00:10:57.461 ] 00:10:57.461 }' 00:10:57.461 17:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.461 17:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.719 17:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:57.719 17:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.719 17:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.719 [2024-12-07 17:27:30.966250] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:57.719 [2024-12-07 17:27:30.966334] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:57.719 17:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.719 17:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:57.719 17:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.719 17:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.719 [2024-12-07 17:27:30.978237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:57.719 [2024-12-07 17:27:30.980296] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:57.719 [2024-12-07 17:27:30.980353] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:57.719 [2024-12-07 17:27:30.980364] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:57.719 [2024-12-07 17:27:30.980374] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:57.719 [2024-12-07 17:27:30.980381] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:57.719 [2024-12-07 17:27:30.980389] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:57.719 17:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.719 17:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:57.719 17:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:57.719 17:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:10:57.719 17:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:57.719 17:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:57.719 17:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:57.719 17:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:57.719 17:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:57.719 17:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.719 17:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.719 17:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.719 17:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.719 17:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.719 17:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.719 17:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.719 17:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.719 17:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.719 17:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.719 "name": "Existed_Raid", 00:10:57.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.720 "strip_size_kb": 64, 00:10:57.720 "state": "configuring", 00:10:57.720 "raid_level": "concat", 00:10:57.720 "superblock": false, 00:10:57.720 "num_base_bdevs": 4, 00:10:57.720 
"num_base_bdevs_discovered": 1, 00:10:57.720 "num_base_bdevs_operational": 4, 00:10:57.720 "base_bdevs_list": [ 00:10:57.720 { 00:10:57.720 "name": "BaseBdev1", 00:10:57.720 "uuid": "35d688c0-7503-4377-9f73-05e6cd839723", 00:10:57.720 "is_configured": true, 00:10:57.720 "data_offset": 0, 00:10:57.720 "data_size": 65536 00:10:57.720 }, 00:10:57.720 { 00:10:57.720 "name": "BaseBdev2", 00:10:57.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.720 "is_configured": false, 00:10:57.720 "data_offset": 0, 00:10:57.720 "data_size": 0 00:10:57.720 }, 00:10:57.720 { 00:10:57.720 "name": "BaseBdev3", 00:10:57.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.720 "is_configured": false, 00:10:57.720 "data_offset": 0, 00:10:57.720 "data_size": 0 00:10:57.720 }, 00:10:57.720 { 00:10:57.720 "name": "BaseBdev4", 00:10:57.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.720 "is_configured": false, 00:10:57.720 "data_offset": 0, 00:10:57.720 "data_size": 0 00:10:57.720 } 00:10:57.720 ] 00:10:57.720 }' 00:10:57.720 17:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.720 17:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.286 17:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:58.286 17:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.286 17:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.286 [2024-12-07 17:27:31.445879] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:58.286 BaseBdev2 00:10:58.286 17:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.286 17:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:58.286 17:27:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:58.286 17:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:58.286 17:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:58.286 17:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:58.286 17:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:58.286 17:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:58.286 17:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.286 17:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.286 17:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.286 17:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:58.286 17:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.286 17:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.286 [ 00:10:58.286 { 00:10:58.286 "name": "BaseBdev2", 00:10:58.286 "aliases": [ 00:10:58.286 "7bc12f23-eb97-4762-9df2-9238b10813ec" 00:10:58.286 ], 00:10:58.286 "product_name": "Malloc disk", 00:10:58.286 "block_size": 512, 00:10:58.286 "num_blocks": 65536, 00:10:58.286 "uuid": "7bc12f23-eb97-4762-9df2-9238b10813ec", 00:10:58.286 "assigned_rate_limits": { 00:10:58.286 "rw_ios_per_sec": 0, 00:10:58.286 "rw_mbytes_per_sec": 0, 00:10:58.286 "r_mbytes_per_sec": 0, 00:10:58.286 "w_mbytes_per_sec": 0 00:10:58.286 }, 00:10:58.286 "claimed": true, 00:10:58.286 "claim_type": "exclusive_write", 00:10:58.286 "zoned": false, 00:10:58.286 "supported_io_types": { 
00:10:58.286 "read": true, 00:10:58.286 "write": true, 00:10:58.286 "unmap": true, 00:10:58.286 "flush": true, 00:10:58.286 "reset": true, 00:10:58.286 "nvme_admin": false, 00:10:58.286 "nvme_io": false, 00:10:58.286 "nvme_io_md": false, 00:10:58.286 "write_zeroes": true, 00:10:58.286 "zcopy": true, 00:10:58.286 "get_zone_info": false, 00:10:58.286 "zone_management": false, 00:10:58.286 "zone_append": false, 00:10:58.286 "compare": false, 00:10:58.286 "compare_and_write": false, 00:10:58.286 "abort": true, 00:10:58.286 "seek_hole": false, 00:10:58.286 "seek_data": false, 00:10:58.286 "copy": true, 00:10:58.286 "nvme_iov_md": false 00:10:58.286 }, 00:10:58.286 "memory_domains": [ 00:10:58.286 { 00:10:58.286 "dma_device_id": "system", 00:10:58.286 "dma_device_type": 1 00:10:58.286 }, 00:10:58.286 { 00:10:58.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.286 "dma_device_type": 2 00:10:58.286 } 00:10:58.286 ], 00:10:58.286 "driver_specific": {} 00:10:58.286 } 00:10:58.286 ] 00:10:58.286 17:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.286 17:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:58.286 17:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:58.286 17:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:58.286 17:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:58.286 17:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.286 17:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:58.286 17:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:58.286 17:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:58.286 17:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:58.286 17:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.286 17:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.286 17:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.286 17:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.286 17:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.286 17:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.286 17:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.286 17:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.286 17:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.286 17:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.286 "name": "Existed_Raid", 00:10:58.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.286 "strip_size_kb": 64, 00:10:58.286 "state": "configuring", 00:10:58.286 "raid_level": "concat", 00:10:58.286 "superblock": false, 00:10:58.286 "num_base_bdevs": 4, 00:10:58.286 "num_base_bdevs_discovered": 2, 00:10:58.286 "num_base_bdevs_operational": 4, 00:10:58.286 "base_bdevs_list": [ 00:10:58.286 { 00:10:58.286 "name": "BaseBdev1", 00:10:58.286 "uuid": "35d688c0-7503-4377-9f73-05e6cd839723", 00:10:58.286 "is_configured": true, 00:10:58.286 "data_offset": 0, 00:10:58.286 "data_size": 65536 00:10:58.286 }, 00:10:58.286 { 00:10:58.286 "name": "BaseBdev2", 00:10:58.286 "uuid": "7bc12f23-eb97-4762-9df2-9238b10813ec", 00:10:58.286 
"is_configured": true, 00:10:58.286 "data_offset": 0, 00:10:58.286 "data_size": 65536 00:10:58.286 }, 00:10:58.286 { 00:10:58.286 "name": "BaseBdev3", 00:10:58.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.286 "is_configured": false, 00:10:58.286 "data_offset": 0, 00:10:58.286 "data_size": 0 00:10:58.286 }, 00:10:58.286 { 00:10:58.286 "name": "BaseBdev4", 00:10:58.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.286 "is_configured": false, 00:10:58.286 "data_offset": 0, 00:10:58.286 "data_size": 0 00:10:58.286 } 00:10:58.286 ] 00:10:58.286 }' 00:10:58.286 17:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.286 17:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.545 17:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:58.545 17:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.545 17:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.802 [2024-12-07 17:27:31.970906] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:58.802 BaseBdev3 00:10:58.802 17:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.802 17:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:58.803 17:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:58.803 17:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:58.803 17:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:58.803 17:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:58.803 17:27:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:58.803 17:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:58.803 17:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.803 17:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.803 17:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.803 17:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:58.803 17:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.803 17:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.803 [ 00:10:58.803 { 00:10:58.803 "name": "BaseBdev3", 00:10:58.803 "aliases": [ 00:10:58.803 "a0c82000-bee3-4f99-bb25-50b6d1ffd66c" 00:10:58.803 ], 00:10:58.803 "product_name": "Malloc disk", 00:10:58.803 "block_size": 512, 00:10:58.803 "num_blocks": 65536, 00:10:58.803 "uuid": "a0c82000-bee3-4f99-bb25-50b6d1ffd66c", 00:10:58.803 "assigned_rate_limits": { 00:10:58.803 "rw_ios_per_sec": 0, 00:10:58.803 "rw_mbytes_per_sec": 0, 00:10:58.803 "r_mbytes_per_sec": 0, 00:10:58.803 "w_mbytes_per_sec": 0 00:10:58.803 }, 00:10:58.803 "claimed": true, 00:10:58.803 "claim_type": "exclusive_write", 00:10:58.803 "zoned": false, 00:10:58.803 "supported_io_types": { 00:10:58.803 "read": true, 00:10:58.803 "write": true, 00:10:58.803 "unmap": true, 00:10:58.803 "flush": true, 00:10:58.803 "reset": true, 00:10:58.803 "nvme_admin": false, 00:10:58.803 "nvme_io": false, 00:10:58.803 "nvme_io_md": false, 00:10:58.803 "write_zeroes": true, 00:10:58.803 "zcopy": true, 00:10:58.803 "get_zone_info": false, 00:10:58.803 "zone_management": false, 00:10:58.803 "zone_append": false, 00:10:58.803 "compare": false, 00:10:58.803 "compare_and_write": false, 
00:10:58.803 "abort": true, 00:10:58.803 "seek_hole": false, 00:10:58.803 "seek_data": false, 00:10:58.803 "copy": true, 00:10:58.803 "nvme_iov_md": false 00:10:58.803 }, 00:10:58.803 "memory_domains": [ 00:10:58.803 { 00:10:58.803 "dma_device_id": "system", 00:10:58.803 "dma_device_type": 1 00:10:58.803 }, 00:10:58.803 { 00:10:58.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.803 "dma_device_type": 2 00:10:58.803 } 00:10:58.803 ], 00:10:58.803 "driver_specific": {} 00:10:58.803 } 00:10:58.803 ] 00:10:58.803 17:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.803 17:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:58.803 17:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:58.803 17:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:58.803 17:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:58.803 17:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.803 17:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:58.803 17:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:58.803 17:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.803 17:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:58.803 17:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.803 17:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.803 17:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:58.803 17:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.803 17:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.803 17:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.803 17:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.803 17:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.803 17:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.803 17:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.803 "name": "Existed_Raid", 00:10:58.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.803 "strip_size_kb": 64, 00:10:58.803 "state": "configuring", 00:10:58.803 "raid_level": "concat", 00:10:58.803 "superblock": false, 00:10:58.803 "num_base_bdevs": 4, 00:10:58.803 "num_base_bdevs_discovered": 3, 00:10:58.803 "num_base_bdevs_operational": 4, 00:10:58.803 "base_bdevs_list": [ 00:10:58.803 { 00:10:58.803 "name": "BaseBdev1", 00:10:58.803 "uuid": "35d688c0-7503-4377-9f73-05e6cd839723", 00:10:58.803 "is_configured": true, 00:10:58.803 "data_offset": 0, 00:10:58.803 "data_size": 65536 00:10:58.803 }, 00:10:58.803 { 00:10:58.803 "name": "BaseBdev2", 00:10:58.803 "uuid": "7bc12f23-eb97-4762-9df2-9238b10813ec", 00:10:58.803 "is_configured": true, 00:10:58.803 "data_offset": 0, 00:10:58.803 "data_size": 65536 00:10:58.803 }, 00:10:58.803 { 00:10:58.803 "name": "BaseBdev3", 00:10:58.803 "uuid": "a0c82000-bee3-4f99-bb25-50b6d1ffd66c", 00:10:58.803 "is_configured": true, 00:10:58.803 "data_offset": 0, 00:10:58.803 "data_size": 65536 00:10:58.803 }, 00:10:58.803 { 00:10:58.803 "name": "BaseBdev4", 00:10:58.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.803 "is_configured": false, 
00:10:58.803 "data_offset": 0, 00:10:58.803 "data_size": 0 00:10:58.803 } 00:10:58.803 ] 00:10:58.803 }' 00:10:58.803 17:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.803 17:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.061 17:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:59.061 17:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.061 17:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.319 [2024-12-07 17:27:32.474208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:59.319 [2024-12-07 17:27:32.474267] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:59.319 [2024-12-07 17:27:32.474276] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:59.319 [2024-12-07 17:27:32.474565] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:59.319 [2024-12-07 17:27:32.474757] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:59.319 [2024-12-07 17:27:32.474780] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:59.319 [2024-12-07 17:27:32.475094] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:59.319 BaseBdev4 00:10:59.319 17:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.319 17:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:59.319 17:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:59.319 17:27:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:59.319 17:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:59.319 17:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:59.319 17:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:59.319 17:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:59.319 17:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.319 17:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.319 17:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.319 17:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:59.319 17:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.319 17:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.319 [ 00:10:59.319 { 00:10:59.319 "name": "BaseBdev4", 00:10:59.319 "aliases": [ 00:10:59.319 "ad0a9494-0d38-40dd-ae12-d42d6799af44" 00:10:59.319 ], 00:10:59.319 "product_name": "Malloc disk", 00:10:59.319 "block_size": 512, 00:10:59.319 "num_blocks": 65536, 00:10:59.319 "uuid": "ad0a9494-0d38-40dd-ae12-d42d6799af44", 00:10:59.319 "assigned_rate_limits": { 00:10:59.319 "rw_ios_per_sec": 0, 00:10:59.319 "rw_mbytes_per_sec": 0, 00:10:59.319 "r_mbytes_per_sec": 0, 00:10:59.319 "w_mbytes_per_sec": 0 00:10:59.319 }, 00:10:59.319 "claimed": true, 00:10:59.319 "claim_type": "exclusive_write", 00:10:59.319 "zoned": false, 00:10:59.319 "supported_io_types": { 00:10:59.319 "read": true, 00:10:59.319 "write": true, 00:10:59.319 "unmap": true, 00:10:59.319 "flush": true, 00:10:59.319 "reset": true, 00:10:59.319 
"nvme_admin": false, 00:10:59.319 "nvme_io": false, 00:10:59.319 "nvme_io_md": false, 00:10:59.319 "write_zeroes": true, 00:10:59.319 "zcopy": true, 00:10:59.319 "get_zone_info": false, 00:10:59.319 "zone_management": false, 00:10:59.319 "zone_append": false, 00:10:59.319 "compare": false, 00:10:59.319 "compare_and_write": false, 00:10:59.319 "abort": true, 00:10:59.319 "seek_hole": false, 00:10:59.319 "seek_data": false, 00:10:59.319 "copy": true, 00:10:59.319 "nvme_iov_md": false 00:10:59.319 }, 00:10:59.319 "memory_domains": [ 00:10:59.319 { 00:10:59.319 "dma_device_id": "system", 00:10:59.319 "dma_device_type": 1 00:10:59.319 }, 00:10:59.319 { 00:10:59.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.319 "dma_device_type": 2 00:10:59.319 } 00:10:59.319 ], 00:10:59.319 "driver_specific": {} 00:10:59.319 } 00:10:59.319 ] 00:10:59.319 17:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.319 17:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:59.319 17:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:59.319 17:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:59.319 17:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:59.319 17:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:59.319 17:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:59.319 17:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:59.319 17:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.319 17:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:59.319 
17:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.319 17:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.319 17:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.319 17:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.319 17:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.319 17:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.320 17:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.320 17:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.320 17:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.320 17:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.320 "name": "Existed_Raid", 00:10:59.320 "uuid": "5839066d-4810-419c-ab04-ad400002ec66", 00:10:59.320 "strip_size_kb": 64, 00:10:59.320 "state": "online", 00:10:59.320 "raid_level": "concat", 00:10:59.320 "superblock": false, 00:10:59.320 "num_base_bdevs": 4, 00:10:59.320 "num_base_bdevs_discovered": 4, 00:10:59.320 "num_base_bdevs_operational": 4, 00:10:59.320 "base_bdevs_list": [ 00:10:59.320 { 00:10:59.320 "name": "BaseBdev1", 00:10:59.320 "uuid": "35d688c0-7503-4377-9f73-05e6cd839723", 00:10:59.320 "is_configured": true, 00:10:59.320 "data_offset": 0, 00:10:59.320 "data_size": 65536 00:10:59.320 }, 00:10:59.320 { 00:10:59.320 "name": "BaseBdev2", 00:10:59.320 "uuid": "7bc12f23-eb97-4762-9df2-9238b10813ec", 00:10:59.320 "is_configured": true, 00:10:59.320 "data_offset": 0, 00:10:59.320 "data_size": 65536 00:10:59.320 }, 00:10:59.320 { 00:10:59.320 "name": "BaseBdev3", 
00:10:59.320 "uuid": "a0c82000-bee3-4f99-bb25-50b6d1ffd66c", 00:10:59.320 "is_configured": true, 00:10:59.320 "data_offset": 0, 00:10:59.320 "data_size": 65536 00:10:59.320 }, 00:10:59.320 { 00:10:59.320 "name": "BaseBdev4", 00:10:59.320 "uuid": "ad0a9494-0d38-40dd-ae12-d42d6799af44", 00:10:59.320 "is_configured": true, 00:10:59.320 "data_offset": 0, 00:10:59.320 "data_size": 65536 00:10:59.320 } 00:10:59.320 ] 00:10:59.320 }' 00:10:59.320 17:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.320 17:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.583 17:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:59.583 17:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:59.583 17:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:59.583 17:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:59.583 17:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:59.583 17:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:59.583 17:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:59.583 17:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:59.583 17:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.583 17:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.583 [2024-12-07 17:27:32.949883] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:59.842 17:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.842 
17:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:59.842 "name": "Existed_Raid", 00:10:59.842 "aliases": [ 00:10:59.842 "5839066d-4810-419c-ab04-ad400002ec66" 00:10:59.842 ], 00:10:59.843 "product_name": "Raid Volume", 00:10:59.843 "block_size": 512, 00:10:59.843 "num_blocks": 262144, 00:10:59.843 "uuid": "5839066d-4810-419c-ab04-ad400002ec66", 00:10:59.843 "assigned_rate_limits": { 00:10:59.843 "rw_ios_per_sec": 0, 00:10:59.843 "rw_mbytes_per_sec": 0, 00:10:59.843 "r_mbytes_per_sec": 0, 00:10:59.843 "w_mbytes_per_sec": 0 00:10:59.843 }, 00:10:59.843 "claimed": false, 00:10:59.843 "zoned": false, 00:10:59.843 "supported_io_types": { 00:10:59.843 "read": true, 00:10:59.843 "write": true, 00:10:59.843 "unmap": true, 00:10:59.843 "flush": true, 00:10:59.843 "reset": true, 00:10:59.843 "nvme_admin": false, 00:10:59.843 "nvme_io": false, 00:10:59.843 "nvme_io_md": false, 00:10:59.843 "write_zeroes": true, 00:10:59.843 "zcopy": false, 00:10:59.843 "get_zone_info": false, 00:10:59.843 "zone_management": false, 00:10:59.843 "zone_append": false, 00:10:59.843 "compare": false, 00:10:59.843 "compare_and_write": false, 00:10:59.843 "abort": false, 00:10:59.843 "seek_hole": false, 00:10:59.843 "seek_data": false, 00:10:59.843 "copy": false, 00:10:59.843 "nvme_iov_md": false 00:10:59.843 }, 00:10:59.843 "memory_domains": [ 00:10:59.843 { 00:10:59.843 "dma_device_id": "system", 00:10:59.843 "dma_device_type": 1 00:10:59.843 }, 00:10:59.843 { 00:10:59.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.843 "dma_device_type": 2 00:10:59.843 }, 00:10:59.843 { 00:10:59.843 "dma_device_id": "system", 00:10:59.843 "dma_device_type": 1 00:10:59.843 }, 00:10:59.843 { 00:10:59.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.843 "dma_device_type": 2 00:10:59.843 }, 00:10:59.843 { 00:10:59.843 "dma_device_id": "system", 00:10:59.843 "dma_device_type": 1 00:10:59.843 }, 00:10:59.843 { 00:10:59.843 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:59.843 "dma_device_type": 2 00:10:59.843 }, 00:10:59.843 { 00:10:59.843 "dma_device_id": "system", 00:10:59.843 "dma_device_type": 1 00:10:59.843 }, 00:10:59.843 { 00:10:59.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.843 "dma_device_type": 2 00:10:59.843 } 00:10:59.843 ], 00:10:59.843 "driver_specific": { 00:10:59.843 "raid": { 00:10:59.843 "uuid": "5839066d-4810-419c-ab04-ad400002ec66", 00:10:59.843 "strip_size_kb": 64, 00:10:59.843 "state": "online", 00:10:59.843 "raid_level": "concat", 00:10:59.843 "superblock": false, 00:10:59.843 "num_base_bdevs": 4, 00:10:59.843 "num_base_bdevs_discovered": 4, 00:10:59.843 "num_base_bdevs_operational": 4, 00:10:59.843 "base_bdevs_list": [ 00:10:59.843 { 00:10:59.843 "name": "BaseBdev1", 00:10:59.843 "uuid": "35d688c0-7503-4377-9f73-05e6cd839723", 00:10:59.843 "is_configured": true, 00:10:59.843 "data_offset": 0, 00:10:59.843 "data_size": 65536 00:10:59.843 }, 00:10:59.843 { 00:10:59.843 "name": "BaseBdev2", 00:10:59.843 "uuid": "7bc12f23-eb97-4762-9df2-9238b10813ec", 00:10:59.843 "is_configured": true, 00:10:59.843 "data_offset": 0, 00:10:59.843 "data_size": 65536 00:10:59.843 }, 00:10:59.843 { 00:10:59.843 "name": "BaseBdev3", 00:10:59.843 "uuid": "a0c82000-bee3-4f99-bb25-50b6d1ffd66c", 00:10:59.843 "is_configured": true, 00:10:59.843 "data_offset": 0, 00:10:59.843 "data_size": 65536 00:10:59.843 }, 00:10:59.843 { 00:10:59.843 "name": "BaseBdev4", 00:10:59.843 "uuid": "ad0a9494-0d38-40dd-ae12-d42d6799af44", 00:10:59.843 "is_configured": true, 00:10:59.843 "data_offset": 0, 00:10:59.843 "data_size": 65536 00:10:59.843 } 00:10:59.843 ] 00:10:59.843 } 00:10:59.843 } 00:10:59.843 }' 00:10:59.843 17:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:59.843 17:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:59.843 BaseBdev2 
00:10:59.843 BaseBdev3 00:10:59.843 BaseBdev4' 00:10:59.843 17:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.843 17:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:59.843 17:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:59.843 17:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:59.843 17:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.843 17:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.843 17:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.843 17:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.843 17:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:59.843 17:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:59.843 17:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:59.843 17:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:59.843 17:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.843 17:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.843 17:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.843 17:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.843 17:27:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:59.843 17:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:59.843 17:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:59.843 17:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:59.843 17:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.843 17:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.843 17:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.843 17:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.843 17:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:59.843 17:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:59.844 17:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:59.844 17:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:59.844 17:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.844 17:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.844 17:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.844 17:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.102 17:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:00.102 17:27:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:00.102 17:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:00.102 17:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.102 17:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.102 [2024-12-07 17:27:33.237094] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:00.102 [2024-12-07 17:27:33.237140] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:00.102 [2024-12-07 17:27:33.237197] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:00.102 17:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.102 17:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:00.102 17:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:00.102 17:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:00.102 17:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:00.102 17:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:00.102 17:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:00.102 17:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.102 17:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:00.102 17:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:00.102 17:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:11:00.102 17:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:00.102 17:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.102 17:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.102 17:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.102 17:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.102 17:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.102 17:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.102 17:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.102 17:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.102 17:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.102 17:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.102 "name": "Existed_Raid", 00:11:00.102 "uuid": "5839066d-4810-419c-ab04-ad400002ec66", 00:11:00.102 "strip_size_kb": 64, 00:11:00.102 "state": "offline", 00:11:00.102 "raid_level": "concat", 00:11:00.102 "superblock": false, 00:11:00.102 "num_base_bdevs": 4, 00:11:00.102 "num_base_bdevs_discovered": 3, 00:11:00.102 "num_base_bdevs_operational": 3, 00:11:00.102 "base_bdevs_list": [ 00:11:00.102 { 00:11:00.102 "name": null, 00:11:00.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.102 "is_configured": false, 00:11:00.102 "data_offset": 0, 00:11:00.102 "data_size": 65536 00:11:00.102 }, 00:11:00.102 { 00:11:00.102 "name": "BaseBdev2", 00:11:00.102 "uuid": "7bc12f23-eb97-4762-9df2-9238b10813ec", 00:11:00.102 "is_configured": 
true, 00:11:00.102 "data_offset": 0, 00:11:00.102 "data_size": 65536 00:11:00.102 }, 00:11:00.102 { 00:11:00.102 "name": "BaseBdev3", 00:11:00.102 "uuid": "a0c82000-bee3-4f99-bb25-50b6d1ffd66c", 00:11:00.102 "is_configured": true, 00:11:00.102 "data_offset": 0, 00:11:00.102 "data_size": 65536 00:11:00.102 }, 00:11:00.102 { 00:11:00.102 "name": "BaseBdev4", 00:11:00.102 "uuid": "ad0a9494-0d38-40dd-ae12-d42d6799af44", 00:11:00.102 "is_configured": true, 00:11:00.102 "data_offset": 0, 00:11:00.103 "data_size": 65536 00:11:00.103 } 00:11:00.103 ] 00:11:00.103 }' 00:11:00.103 17:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.103 17:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.668 17:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:00.668 17:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:00.668 17:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.668 17:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:00.668 17:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.668 17:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.668 17:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.668 17:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:00.668 17:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:00.668 17:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:00.668 17:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:00.668 17:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.668 [2024-12-07 17:27:33.847246] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:00.668 17:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.668 17:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:00.668 17:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:00.668 17:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.668 17:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.668 17:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:00.668 17:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.668 17:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.668 17:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:00.668 17:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:00.668 17:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:00.668 17:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.668 17:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.668 [2024-12-07 17:27:34.008235] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:00.926 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.926 17:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:00.926 17:27:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:00.926 17:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.927 17:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:00.927 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.927 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.927 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.927 17:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:00.927 17:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:00.927 17:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:00.927 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.927 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.927 [2024-12-07 17:27:34.167278] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:00.927 [2024-12-07 17:27:34.167421] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:00.927 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.927 17:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:00.927 17:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:00.927 17:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.927 17:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:11:00.927 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.927 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.927 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.184 17:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:01.184 17:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:01.184 17:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:01.184 17:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:01.184 17:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:01.184 17:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:01.184 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.184 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.184 BaseBdev2 00:11:01.184 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.184 17:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:01.184 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:01.184 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:01.184 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:01.184 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:01.184 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:11:01.184 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:01.184 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.184 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.184 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.184 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:01.184 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.184 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.184 [ 00:11:01.184 { 00:11:01.184 "name": "BaseBdev2", 00:11:01.184 "aliases": [ 00:11:01.184 "2f1872f3-44d2-4730-86ec-9cc275c3af47" 00:11:01.184 ], 00:11:01.184 "product_name": "Malloc disk", 00:11:01.184 "block_size": 512, 00:11:01.184 "num_blocks": 65536, 00:11:01.184 "uuid": "2f1872f3-44d2-4730-86ec-9cc275c3af47", 00:11:01.184 "assigned_rate_limits": { 00:11:01.184 "rw_ios_per_sec": 0, 00:11:01.184 "rw_mbytes_per_sec": 0, 00:11:01.184 "r_mbytes_per_sec": 0, 00:11:01.184 "w_mbytes_per_sec": 0 00:11:01.184 }, 00:11:01.184 "claimed": false, 00:11:01.184 "zoned": false, 00:11:01.184 "supported_io_types": { 00:11:01.184 "read": true, 00:11:01.184 "write": true, 00:11:01.184 "unmap": true, 00:11:01.184 "flush": true, 00:11:01.184 "reset": true, 00:11:01.184 "nvme_admin": false, 00:11:01.184 "nvme_io": false, 00:11:01.185 "nvme_io_md": false, 00:11:01.185 "write_zeroes": true, 00:11:01.185 "zcopy": true, 00:11:01.185 "get_zone_info": false, 00:11:01.185 "zone_management": false, 00:11:01.185 "zone_append": false, 00:11:01.185 "compare": false, 00:11:01.185 "compare_and_write": false, 00:11:01.185 "abort": true, 00:11:01.185 "seek_hole": false, 00:11:01.185 
"seek_data": false, 00:11:01.185 "copy": true, 00:11:01.185 "nvme_iov_md": false 00:11:01.185 }, 00:11:01.185 "memory_domains": [ 00:11:01.185 { 00:11:01.185 "dma_device_id": "system", 00:11:01.185 "dma_device_type": 1 00:11:01.185 }, 00:11:01.185 { 00:11:01.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.185 "dma_device_type": 2 00:11:01.185 } 00:11:01.185 ], 00:11:01.185 "driver_specific": {} 00:11:01.185 } 00:11:01.185 ] 00:11:01.185 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.185 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:01.185 17:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:01.185 17:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:01.185 17:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:01.185 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.185 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.185 BaseBdev3 00:11:01.185 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.185 17:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:01.185 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:01.185 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:01.185 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:01.185 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:01.185 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:11:01.185 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:01.185 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.185 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.185 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.185 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:01.185 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.185 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.185 [ 00:11:01.185 { 00:11:01.185 "name": "BaseBdev3", 00:11:01.185 "aliases": [ 00:11:01.185 "117ca057-c815-4130-aa87-fde134d4b142" 00:11:01.185 ], 00:11:01.185 "product_name": "Malloc disk", 00:11:01.185 "block_size": 512, 00:11:01.185 "num_blocks": 65536, 00:11:01.185 "uuid": "117ca057-c815-4130-aa87-fde134d4b142", 00:11:01.185 "assigned_rate_limits": { 00:11:01.185 "rw_ios_per_sec": 0, 00:11:01.185 "rw_mbytes_per_sec": 0, 00:11:01.185 "r_mbytes_per_sec": 0, 00:11:01.185 "w_mbytes_per_sec": 0 00:11:01.185 }, 00:11:01.185 "claimed": false, 00:11:01.185 "zoned": false, 00:11:01.185 "supported_io_types": { 00:11:01.185 "read": true, 00:11:01.185 "write": true, 00:11:01.185 "unmap": true, 00:11:01.185 "flush": true, 00:11:01.185 "reset": true, 00:11:01.185 "nvme_admin": false, 00:11:01.185 "nvme_io": false, 00:11:01.185 "nvme_io_md": false, 00:11:01.185 "write_zeroes": true, 00:11:01.185 "zcopy": true, 00:11:01.185 "get_zone_info": false, 00:11:01.185 "zone_management": false, 00:11:01.185 "zone_append": false, 00:11:01.185 "compare": false, 00:11:01.185 "compare_and_write": false, 00:11:01.185 "abort": true, 00:11:01.185 "seek_hole": false, 00:11:01.185 "seek_data": false, 
00:11:01.185 "copy": true, 00:11:01.185 "nvme_iov_md": false 00:11:01.185 }, 00:11:01.185 "memory_domains": [ 00:11:01.185 { 00:11:01.185 "dma_device_id": "system", 00:11:01.185 "dma_device_type": 1 00:11:01.185 }, 00:11:01.185 { 00:11:01.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.185 "dma_device_type": 2 00:11:01.185 } 00:11:01.185 ], 00:11:01.185 "driver_specific": {} 00:11:01.185 } 00:11:01.185 ] 00:11:01.185 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.185 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:01.185 17:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:01.185 17:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:01.185 17:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:01.185 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.185 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.185 BaseBdev4 00:11:01.185 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.185 17:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:01.185 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:01.185 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:01.185 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:01.185 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:01.185 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:01.185 
17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:01.185 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.185 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.185 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.185 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:01.185 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.185 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.443 [ 00:11:01.443 { 00:11:01.443 "name": "BaseBdev4", 00:11:01.443 "aliases": [ 00:11:01.443 "71d7be7c-512d-49a4-bf92-48ee589c74cd" 00:11:01.443 ], 00:11:01.443 "product_name": "Malloc disk", 00:11:01.443 "block_size": 512, 00:11:01.443 "num_blocks": 65536, 00:11:01.443 "uuid": "71d7be7c-512d-49a4-bf92-48ee589c74cd", 00:11:01.443 "assigned_rate_limits": { 00:11:01.443 "rw_ios_per_sec": 0, 00:11:01.443 "rw_mbytes_per_sec": 0, 00:11:01.443 "r_mbytes_per_sec": 0, 00:11:01.443 "w_mbytes_per_sec": 0 00:11:01.443 }, 00:11:01.443 "claimed": false, 00:11:01.443 "zoned": false, 00:11:01.443 "supported_io_types": { 00:11:01.443 "read": true, 00:11:01.443 "write": true, 00:11:01.443 "unmap": true, 00:11:01.443 "flush": true, 00:11:01.443 "reset": true, 00:11:01.443 "nvme_admin": false, 00:11:01.443 "nvme_io": false, 00:11:01.443 "nvme_io_md": false, 00:11:01.443 "write_zeroes": true, 00:11:01.443 "zcopy": true, 00:11:01.443 "get_zone_info": false, 00:11:01.443 "zone_management": false, 00:11:01.443 "zone_append": false, 00:11:01.443 "compare": false, 00:11:01.443 "compare_and_write": false, 00:11:01.443 "abort": true, 00:11:01.443 "seek_hole": false, 00:11:01.443 "seek_data": false, 00:11:01.443 
"copy": true, 00:11:01.443 "nvme_iov_md": false 00:11:01.443 }, 00:11:01.443 "memory_domains": [ 00:11:01.443 { 00:11:01.443 "dma_device_id": "system", 00:11:01.443 "dma_device_type": 1 00:11:01.443 }, 00:11:01.443 { 00:11:01.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.443 "dma_device_type": 2 00:11:01.443 } 00:11:01.443 ], 00:11:01.443 "driver_specific": {} 00:11:01.443 } 00:11:01.443 ] 00:11:01.443 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.443 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:01.443 17:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:01.443 17:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:01.443 17:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:01.443 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.443 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.443 [2024-12-07 17:27:34.585436] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:01.443 [2024-12-07 17:27:34.585566] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:01.443 [2024-12-07 17:27:34.585620] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:01.443 [2024-12-07 17:27:34.587808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:01.443 [2024-12-07 17:27:34.587906] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:01.443 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.443 17:27:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:01.443 17:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:01.443 17:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:01.443 17:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:01.443 17:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:01.443 17:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:01.443 17:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.443 17:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.443 17:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.443 17:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.443 17:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.443 17:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.443 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.443 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.443 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.443 17:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.443 "name": "Existed_Raid", 00:11:01.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.443 "strip_size_kb": 64, 00:11:01.443 "state": "configuring", 00:11:01.443 
"raid_level": "concat", 00:11:01.443 "superblock": false, 00:11:01.443 "num_base_bdevs": 4, 00:11:01.443 "num_base_bdevs_discovered": 3, 00:11:01.443 "num_base_bdevs_operational": 4, 00:11:01.443 "base_bdevs_list": [ 00:11:01.443 { 00:11:01.443 "name": "BaseBdev1", 00:11:01.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.443 "is_configured": false, 00:11:01.443 "data_offset": 0, 00:11:01.443 "data_size": 0 00:11:01.443 }, 00:11:01.443 { 00:11:01.443 "name": "BaseBdev2", 00:11:01.443 "uuid": "2f1872f3-44d2-4730-86ec-9cc275c3af47", 00:11:01.443 "is_configured": true, 00:11:01.443 "data_offset": 0, 00:11:01.443 "data_size": 65536 00:11:01.443 }, 00:11:01.443 { 00:11:01.443 "name": "BaseBdev3", 00:11:01.443 "uuid": "117ca057-c815-4130-aa87-fde134d4b142", 00:11:01.443 "is_configured": true, 00:11:01.443 "data_offset": 0, 00:11:01.443 "data_size": 65536 00:11:01.443 }, 00:11:01.443 { 00:11:01.443 "name": "BaseBdev4", 00:11:01.443 "uuid": "71d7be7c-512d-49a4-bf92-48ee589c74cd", 00:11:01.443 "is_configured": true, 00:11:01.443 "data_offset": 0, 00:11:01.443 "data_size": 65536 00:11:01.443 } 00:11:01.443 ] 00:11:01.443 }' 00:11:01.443 17:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.443 17:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.734 17:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:01.734 17:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.734 17:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.734 [2024-12-07 17:27:35.036722] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:01.734 17:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.734 17:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:01.734 17:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:01.734 17:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:01.734 17:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:01.734 17:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:01.734 17:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:01.734 17:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.734 17:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.734 17:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.734 17:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.734 17:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.734 17:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.734 17:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.734 17:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.734 17:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.734 17:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.734 "name": "Existed_Raid", 00:11:01.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.734 "strip_size_kb": 64, 00:11:01.734 "state": "configuring", 00:11:01.734 "raid_level": "concat", 00:11:01.734 "superblock": false, 
00:11:01.734 "num_base_bdevs": 4, 00:11:01.734 "num_base_bdevs_discovered": 2, 00:11:01.734 "num_base_bdevs_operational": 4, 00:11:01.734 "base_bdevs_list": [ 00:11:01.734 { 00:11:01.734 "name": "BaseBdev1", 00:11:01.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.734 "is_configured": false, 00:11:01.734 "data_offset": 0, 00:11:01.734 "data_size": 0 00:11:01.734 }, 00:11:01.734 { 00:11:01.734 "name": null, 00:11:01.734 "uuid": "2f1872f3-44d2-4730-86ec-9cc275c3af47", 00:11:01.734 "is_configured": false, 00:11:01.734 "data_offset": 0, 00:11:01.734 "data_size": 65536 00:11:01.734 }, 00:11:01.734 { 00:11:01.734 "name": "BaseBdev3", 00:11:01.734 "uuid": "117ca057-c815-4130-aa87-fde134d4b142", 00:11:01.734 "is_configured": true, 00:11:01.734 "data_offset": 0, 00:11:01.734 "data_size": 65536 00:11:01.734 }, 00:11:01.734 { 00:11:01.734 "name": "BaseBdev4", 00:11:01.734 "uuid": "71d7be7c-512d-49a4-bf92-48ee589c74cd", 00:11:01.734 "is_configured": true, 00:11:01.734 "data_offset": 0, 00:11:01.734 "data_size": 65536 00:11:01.734 } 00:11:01.734 ] 00:11:01.734 }' 00:11:01.734 17:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.734 17:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.300 17:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.300 17:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:02.300 17:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.300 17:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.300 17:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.300 17:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:02.300 17:27:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:02.300 17:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.300 17:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.300 [2024-12-07 17:27:35.538963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:02.300 BaseBdev1 00:11:02.300 17:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.300 17:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:02.300 17:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:02.300 17:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:02.300 17:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:02.300 17:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:02.300 17:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:02.301 17:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:02.301 17:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.301 17:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.301 17:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.301 17:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:02.301 17:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.301 17:27:35 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:02.301 [ 00:11:02.301 { 00:11:02.301 "name": "BaseBdev1", 00:11:02.301 "aliases": [ 00:11:02.301 "2681bc39-13ec-4e41-9fad-2a9ae7f2ab44" 00:11:02.301 ], 00:11:02.301 "product_name": "Malloc disk", 00:11:02.301 "block_size": 512, 00:11:02.301 "num_blocks": 65536, 00:11:02.301 "uuid": "2681bc39-13ec-4e41-9fad-2a9ae7f2ab44", 00:11:02.301 "assigned_rate_limits": { 00:11:02.301 "rw_ios_per_sec": 0, 00:11:02.301 "rw_mbytes_per_sec": 0, 00:11:02.301 "r_mbytes_per_sec": 0, 00:11:02.301 "w_mbytes_per_sec": 0 00:11:02.301 }, 00:11:02.301 "claimed": true, 00:11:02.301 "claim_type": "exclusive_write", 00:11:02.301 "zoned": false, 00:11:02.301 "supported_io_types": { 00:11:02.301 "read": true, 00:11:02.301 "write": true, 00:11:02.301 "unmap": true, 00:11:02.301 "flush": true, 00:11:02.301 "reset": true, 00:11:02.301 "nvme_admin": false, 00:11:02.301 "nvme_io": false, 00:11:02.301 "nvme_io_md": false, 00:11:02.301 "write_zeroes": true, 00:11:02.301 "zcopy": true, 00:11:02.301 "get_zone_info": false, 00:11:02.301 "zone_management": false, 00:11:02.301 "zone_append": false, 00:11:02.301 "compare": false, 00:11:02.301 "compare_and_write": false, 00:11:02.301 "abort": true, 00:11:02.301 "seek_hole": false, 00:11:02.301 "seek_data": false, 00:11:02.301 "copy": true, 00:11:02.301 "nvme_iov_md": false 00:11:02.301 }, 00:11:02.301 "memory_domains": [ 00:11:02.301 { 00:11:02.301 "dma_device_id": "system", 00:11:02.301 "dma_device_type": 1 00:11:02.301 }, 00:11:02.301 { 00:11:02.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.301 "dma_device_type": 2 00:11:02.301 } 00:11:02.301 ], 00:11:02.301 "driver_specific": {} 00:11:02.301 } 00:11:02.301 ] 00:11:02.301 17:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.301 17:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:02.301 17:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:02.301 17:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:02.301 17:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:02.301 17:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:02.301 17:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:02.301 17:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:02.301 17:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.301 17:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.301 17:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.301 17:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.301 17:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.301 17:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:02.301 17:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.301 17:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.301 17:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.301 17:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.301 "name": "Existed_Raid", 00:11:02.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.301 "strip_size_kb": 64, 00:11:02.301 "state": "configuring", 00:11:02.301 "raid_level": "concat", 00:11:02.301 "superblock": false, 
00:11:02.301 "num_base_bdevs": 4, 00:11:02.301 "num_base_bdevs_discovered": 3, 00:11:02.301 "num_base_bdevs_operational": 4, 00:11:02.301 "base_bdevs_list": [ 00:11:02.301 { 00:11:02.301 "name": "BaseBdev1", 00:11:02.301 "uuid": "2681bc39-13ec-4e41-9fad-2a9ae7f2ab44", 00:11:02.301 "is_configured": true, 00:11:02.301 "data_offset": 0, 00:11:02.301 "data_size": 65536 00:11:02.301 }, 00:11:02.301 { 00:11:02.301 "name": null, 00:11:02.301 "uuid": "2f1872f3-44d2-4730-86ec-9cc275c3af47", 00:11:02.301 "is_configured": false, 00:11:02.301 "data_offset": 0, 00:11:02.301 "data_size": 65536 00:11:02.301 }, 00:11:02.301 { 00:11:02.301 "name": "BaseBdev3", 00:11:02.301 "uuid": "117ca057-c815-4130-aa87-fde134d4b142", 00:11:02.301 "is_configured": true, 00:11:02.301 "data_offset": 0, 00:11:02.301 "data_size": 65536 00:11:02.301 }, 00:11:02.301 { 00:11:02.301 "name": "BaseBdev4", 00:11:02.301 "uuid": "71d7be7c-512d-49a4-bf92-48ee589c74cd", 00:11:02.301 "is_configured": true, 00:11:02.301 "data_offset": 0, 00:11:02.301 "data_size": 65536 00:11:02.301 } 00:11:02.301 ] 00:11:02.301 }' 00:11:02.301 17:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.301 17:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.867 17:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.867 17:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.867 17:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.867 17:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:02.867 17:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.867 17:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:02.867 17:27:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:02.867 17:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.867 17:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.867 [2024-12-07 17:27:36.038165] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:02.867 17:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.868 17:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:02.868 17:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:02.868 17:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:02.868 17:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:02.868 17:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:02.868 17:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:02.868 17:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.868 17:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.868 17:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.868 17:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.868 17:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.868 17:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:02.868 17:27:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.868 17:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.868 17:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.868 17:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.868 "name": "Existed_Raid", 00:11:02.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.868 "strip_size_kb": 64, 00:11:02.868 "state": "configuring", 00:11:02.868 "raid_level": "concat", 00:11:02.868 "superblock": false, 00:11:02.868 "num_base_bdevs": 4, 00:11:02.868 "num_base_bdevs_discovered": 2, 00:11:02.868 "num_base_bdevs_operational": 4, 00:11:02.868 "base_bdevs_list": [ 00:11:02.868 { 00:11:02.868 "name": "BaseBdev1", 00:11:02.868 "uuid": "2681bc39-13ec-4e41-9fad-2a9ae7f2ab44", 00:11:02.868 "is_configured": true, 00:11:02.868 "data_offset": 0, 00:11:02.868 "data_size": 65536 00:11:02.868 }, 00:11:02.868 { 00:11:02.868 "name": null, 00:11:02.868 "uuid": "2f1872f3-44d2-4730-86ec-9cc275c3af47", 00:11:02.868 "is_configured": false, 00:11:02.868 "data_offset": 0, 00:11:02.868 "data_size": 65536 00:11:02.868 }, 00:11:02.868 { 00:11:02.868 "name": null, 00:11:02.868 "uuid": "117ca057-c815-4130-aa87-fde134d4b142", 00:11:02.868 "is_configured": false, 00:11:02.868 "data_offset": 0, 00:11:02.868 "data_size": 65536 00:11:02.868 }, 00:11:02.868 { 00:11:02.868 "name": "BaseBdev4", 00:11:02.868 "uuid": "71d7be7c-512d-49a4-bf92-48ee589c74cd", 00:11:02.868 "is_configured": true, 00:11:02.868 "data_offset": 0, 00:11:02.868 "data_size": 65536 00:11:02.868 } 00:11:02.868 ] 00:11:02.868 }' 00:11:02.868 17:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.868 17:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.126 17:27:36 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.126 17:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:03.126 17:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.126 17:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.385 17:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.385 17:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:03.385 17:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:03.385 17:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.385 17:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.385 [2024-12-07 17:27:36.553232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:03.385 17:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.385 17:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:03.385 17:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.385 17:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:03.385 17:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:03.385 17:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:03.385 17:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:03.385 17:27:36 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.385 17:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.385 17:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.385 17:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.385 17:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.385 17:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.385 17:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.385 17:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.385 17:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.385 17:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.385 "name": "Existed_Raid", 00:11:03.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.385 "strip_size_kb": 64, 00:11:03.385 "state": "configuring", 00:11:03.385 "raid_level": "concat", 00:11:03.385 "superblock": false, 00:11:03.385 "num_base_bdevs": 4, 00:11:03.385 "num_base_bdevs_discovered": 3, 00:11:03.385 "num_base_bdevs_operational": 4, 00:11:03.385 "base_bdevs_list": [ 00:11:03.385 { 00:11:03.385 "name": "BaseBdev1", 00:11:03.385 "uuid": "2681bc39-13ec-4e41-9fad-2a9ae7f2ab44", 00:11:03.385 "is_configured": true, 00:11:03.385 "data_offset": 0, 00:11:03.385 "data_size": 65536 00:11:03.385 }, 00:11:03.385 { 00:11:03.385 "name": null, 00:11:03.385 "uuid": "2f1872f3-44d2-4730-86ec-9cc275c3af47", 00:11:03.385 "is_configured": false, 00:11:03.385 "data_offset": 0, 00:11:03.385 "data_size": 65536 00:11:03.385 }, 00:11:03.385 { 00:11:03.385 "name": "BaseBdev3", 00:11:03.385 "uuid": 
"117ca057-c815-4130-aa87-fde134d4b142", 00:11:03.385 "is_configured": true, 00:11:03.385 "data_offset": 0, 00:11:03.385 "data_size": 65536 00:11:03.385 }, 00:11:03.385 { 00:11:03.385 "name": "BaseBdev4", 00:11:03.385 "uuid": "71d7be7c-512d-49a4-bf92-48ee589c74cd", 00:11:03.385 "is_configured": true, 00:11:03.385 "data_offset": 0, 00:11:03.385 "data_size": 65536 00:11:03.385 } 00:11:03.385 ] 00:11:03.385 }' 00:11:03.385 17:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.385 17:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.953 17:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.953 17:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.953 17:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.953 17:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:03.953 17:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.953 17:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:03.953 17:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:03.953 17:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.953 17:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.953 [2024-12-07 17:27:37.088401] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:03.953 17:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.953 17:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:11:03.953 17:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.953 17:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:03.953 17:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:03.953 17:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:03.953 17:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:03.953 17:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.953 17:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.953 17:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.953 17:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.953 17:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.953 17:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.953 17:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.953 17:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.953 17:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.953 17:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.953 "name": "Existed_Raid", 00:11:03.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.953 "strip_size_kb": 64, 00:11:03.953 "state": "configuring", 00:11:03.953 "raid_level": "concat", 00:11:03.953 "superblock": false, 00:11:03.953 "num_base_bdevs": 4, 00:11:03.953 
"num_base_bdevs_discovered": 2, 00:11:03.953 "num_base_bdevs_operational": 4, 00:11:03.953 "base_bdevs_list": [ 00:11:03.953 { 00:11:03.953 "name": null, 00:11:03.953 "uuid": "2681bc39-13ec-4e41-9fad-2a9ae7f2ab44", 00:11:03.953 "is_configured": false, 00:11:03.953 "data_offset": 0, 00:11:03.953 "data_size": 65536 00:11:03.953 }, 00:11:03.953 { 00:11:03.953 "name": null, 00:11:03.953 "uuid": "2f1872f3-44d2-4730-86ec-9cc275c3af47", 00:11:03.953 "is_configured": false, 00:11:03.953 "data_offset": 0, 00:11:03.953 "data_size": 65536 00:11:03.953 }, 00:11:03.953 { 00:11:03.953 "name": "BaseBdev3", 00:11:03.953 "uuid": "117ca057-c815-4130-aa87-fde134d4b142", 00:11:03.953 "is_configured": true, 00:11:03.953 "data_offset": 0, 00:11:03.953 "data_size": 65536 00:11:03.953 }, 00:11:03.953 { 00:11:03.954 "name": "BaseBdev4", 00:11:03.954 "uuid": "71d7be7c-512d-49a4-bf92-48ee589c74cd", 00:11:03.954 "is_configured": true, 00:11:03.954 "data_offset": 0, 00:11:03.954 "data_size": 65536 00:11:03.954 } 00:11:03.954 ] 00:11:03.954 }' 00:11:03.954 17:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.954 17:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.522 17:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.522 17:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:04.522 17:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.522 17:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.522 17:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.522 17:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:04.522 17:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:04.522 17:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.522 17:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.522 [2024-12-07 17:27:37.688736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:04.522 17:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.522 17:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:04.522 17:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:04.523 17:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:04.523 17:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:04.523 17:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:04.523 17:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:04.523 17:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.523 17:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.523 17:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.523 17:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.523 17:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.523 17:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.523 17:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:11:04.523 17:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.523 17:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.523 17:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.523 "name": "Existed_Raid", 00:11:04.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.523 "strip_size_kb": 64, 00:11:04.523 "state": "configuring", 00:11:04.523 "raid_level": "concat", 00:11:04.523 "superblock": false, 00:11:04.523 "num_base_bdevs": 4, 00:11:04.523 "num_base_bdevs_discovered": 3, 00:11:04.523 "num_base_bdevs_operational": 4, 00:11:04.523 "base_bdevs_list": [ 00:11:04.523 { 00:11:04.523 "name": null, 00:11:04.523 "uuid": "2681bc39-13ec-4e41-9fad-2a9ae7f2ab44", 00:11:04.523 "is_configured": false, 00:11:04.523 "data_offset": 0, 00:11:04.523 "data_size": 65536 00:11:04.523 }, 00:11:04.523 { 00:11:04.523 "name": "BaseBdev2", 00:11:04.523 "uuid": "2f1872f3-44d2-4730-86ec-9cc275c3af47", 00:11:04.523 "is_configured": true, 00:11:04.523 "data_offset": 0, 00:11:04.523 "data_size": 65536 00:11:04.523 }, 00:11:04.523 { 00:11:04.523 "name": "BaseBdev3", 00:11:04.523 "uuid": "117ca057-c815-4130-aa87-fde134d4b142", 00:11:04.523 "is_configured": true, 00:11:04.523 "data_offset": 0, 00:11:04.523 "data_size": 65536 00:11:04.523 }, 00:11:04.523 { 00:11:04.523 "name": "BaseBdev4", 00:11:04.523 "uuid": "71d7be7c-512d-49a4-bf92-48ee589c74cd", 00:11:04.523 "is_configured": true, 00:11:04.523 "data_offset": 0, 00:11:04.523 "data_size": 65536 00:11:04.523 } 00:11:04.523 ] 00:11:04.523 }' 00:11:04.523 17:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.523 17:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.782 17:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:11:04.782 17:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.782 17:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.782 17:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.782 17:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.782 17:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:04.782 17:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.782 17:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:04.782 17:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.782 17:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.041 17:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.041 17:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2681bc39-13ec-4e41-9fad-2a9ae7f2ab44 00:11:05.041 17:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.041 17:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.041 [2024-12-07 17:27:38.244576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:05.041 [2024-12-07 17:27:38.244750] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:05.041 [2024-12-07 17:27:38.244775] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:05.041 [2024-12-07 17:27:38.245094] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:05.041 [2024-12-07 17:27:38.245290] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:05.041 [2024-12-07 17:27:38.245331] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:05.041 [2024-12-07 17:27:38.245632] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:05.041 NewBaseBdev 00:11:05.041 17:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.041 17:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:05.041 17:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:05.041 17:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:05.041 17:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:05.041 17:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:05.041 17:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:05.041 17:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:05.041 17:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.041 17:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.041 17:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.041 17:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:05.041 17:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.041 17:27:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.041 [ 00:11:05.041 { 00:11:05.041 "name": "NewBaseBdev", 00:11:05.041 "aliases": [ 00:11:05.041 "2681bc39-13ec-4e41-9fad-2a9ae7f2ab44" 00:11:05.041 ], 00:11:05.041 "product_name": "Malloc disk", 00:11:05.041 "block_size": 512, 00:11:05.041 "num_blocks": 65536, 00:11:05.041 "uuid": "2681bc39-13ec-4e41-9fad-2a9ae7f2ab44", 00:11:05.041 "assigned_rate_limits": { 00:11:05.041 "rw_ios_per_sec": 0, 00:11:05.041 "rw_mbytes_per_sec": 0, 00:11:05.041 "r_mbytes_per_sec": 0, 00:11:05.041 "w_mbytes_per_sec": 0 00:11:05.041 }, 00:11:05.041 "claimed": true, 00:11:05.041 "claim_type": "exclusive_write", 00:11:05.041 "zoned": false, 00:11:05.041 "supported_io_types": { 00:11:05.041 "read": true, 00:11:05.041 "write": true, 00:11:05.041 "unmap": true, 00:11:05.041 "flush": true, 00:11:05.041 "reset": true, 00:11:05.041 "nvme_admin": false, 00:11:05.041 "nvme_io": false, 00:11:05.041 "nvme_io_md": false, 00:11:05.041 "write_zeroes": true, 00:11:05.041 "zcopy": true, 00:11:05.041 "get_zone_info": false, 00:11:05.041 "zone_management": false, 00:11:05.041 "zone_append": false, 00:11:05.041 "compare": false, 00:11:05.041 "compare_and_write": false, 00:11:05.041 "abort": true, 00:11:05.041 "seek_hole": false, 00:11:05.041 "seek_data": false, 00:11:05.041 "copy": true, 00:11:05.041 "nvme_iov_md": false 00:11:05.041 }, 00:11:05.041 "memory_domains": [ 00:11:05.041 { 00:11:05.041 "dma_device_id": "system", 00:11:05.041 "dma_device_type": 1 00:11:05.041 }, 00:11:05.041 { 00:11:05.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.041 "dma_device_type": 2 00:11:05.041 } 00:11:05.041 ], 00:11:05.041 "driver_specific": {} 00:11:05.041 } 00:11:05.041 ] 00:11:05.041 17:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.041 17:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:05.041 17:27:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:05.041 17:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:05.041 17:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:05.041 17:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:05.041 17:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:05.041 17:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:05.042 17:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.042 17:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.042 17:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.042 17:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.042 17:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.042 17:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.042 17:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.042 17:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.042 17:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.042 17:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.042 "name": "Existed_Raid", 00:11:05.042 "uuid": "de754a4c-e512-4aa4-88bd-7749a7e74fec", 00:11:05.042 "strip_size_kb": 64, 00:11:05.042 "state": "online", 00:11:05.042 "raid_level": 
"concat", 00:11:05.042 "superblock": false, 00:11:05.042 "num_base_bdevs": 4, 00:11:05.042 "num_base_bdevs_discovered": 4, 00:11:05.042 "num_base_bdevs_operational": 4, 00:11:05.042 "base_bdevs_list": [ 00:11:05.042 { 00:11:05.042 "name": "NewBaseBdev", 00:11:05.042 "uuid": "2681bc39-13ec-4e41-9fad-2a9ae7f2ab44", 00:11:05.042 "is_configured": true, 00:11:05.042 "data_offset": 0, 00:11:05.042 "data_size": 65536 00:11:05.042 }, 00:11:05.042 { 00:11:05.042 "name": "BaseBdev2", 00:11:05.042 "uuid": "2f1872f3-44d2-4730-86ec-9cc275c3af47", 00:11:05.042 "is_configured": true, 00:11:05.042 "data_offset": 0, 00:11:05.042 "data_size": 65536 00:11:05.042 }, 00:11:05.042 { 00:11:05.042 "name": "BaseBdev3", 00:11:05.042 "uuid": "117ca057-c815-4130-aa87-fde134d4b142", 00:11:05.042 "is_configured": true, 00:11:05.042 "data_offset": 0, 00:11:05.042 "data_size": 65536 00:11:05.042 }, 00:11:05.042 { 00:11:05.042 "name": "BaseBdev4", 00:11:05.042 "uuid": "71d7be7c-512d-49a4-bf92-48ee589c74cd", 00:11:05.042 "is_configured": true, 00:11:05.042 "data_offset": 0, 00:11:05.042 "data_size": 65536 00:11:05.042 } 00:11:05.042 ] 00:11:05.042 }' 00:11:05.042 17:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.042 17:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.610 17:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:05.610 17:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:05.610 17:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:05.610 17:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:05.610 17:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:05.610 17:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local 
cmp_raid_bdev cmp_base_bdev 00:11:05.610 17:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:05.610 17:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:05.610 17:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.610 17:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.610 [2024-12-07 17:27:38.728105] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:05.610 17:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.610 17:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:05.610 "name": "Existed_Raid", 00:11:05.610 "aliases": [ 00:11:05.610 "de754a4c-e512-4aa4-88bd-7749a7e74fec" 00:11:05.610 ], 00:11:05.610 "product_name": "Raid Volume", 00:11:05.610 "block_size": 512, 00:11:05.610 "num_blocks": 262144, 00:11:05.611 "uuid": "de754a4c-e512-4aa4-88bd-7749a7e74fec", 00:11:05.611 "assigned_rate_limits": { 00:11:05.611 "rw_ios_per_sec": 0, 00:11:05.611 "rw_mbytes_per_sec": 0, 00:11:05.611 "r_mbytes_per_sec": 0, 00:11:05.611 "w_mbytes_per_sec": 0 00:11:05.611 }, 00:11:05.611 "claimed": false, 00:11:05.611 "zoned": false, 00:11:05.611 "supported_io_types": { 00:11:05.611 "read": true, 00:11:05.611 "write": true, 00:11:05.611 "unmap": true, 00:11:05.611 "flush": true, 00:11:05.611 "reset": true, 00:11:05.611 "nvme_admin": false, 00:11:05.611 "nvme_io": false, 00:11:05.611 "nvme_io_md": false, 00:11:05.611 "write_zeroes": true, 00:11:05.611 "zcopy": false, 00:11:05.611 "get_zone_info": false, 00:11:05.611 "zone_management": false, 00:11:05.611 "zone_append": false, 00:11:05.611 "compare": false, 00:11:05.611 "compare_and_write": false, 00:11:05.611 "abort": false, 00:11:05.611 "seek_hole": false, 00:11:05.611 "seek_data": false, 00:11:05.611 "copy": false, 
00:11:05.611 "nvme_iov_md": false 00:11:05.611 }, 00:11:05.611 "memory_domains": [ 00:11:05.611 { 00:11:05.611 "dma_device_id": "system", 00:11:05.611 "dma_device_type": 1 00:11:05.611 }, 00:11:05.611 { 00:11:05.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.611 "dma_device_type": 2 00:11:05.611 }, 00:11:05.611 { 00:11:05.611 "dma_device_id": "system", 00:11:05.611 "dma_device_type": 1 00:11:05.611 }, 00:11:05.611 { 00:11:05.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.611 "dma_device_type": 2 00:11:05.611 }, 00:11:05.611 { 00:11:05.611 "dma_device_id": "system", 00:11:05.611 "dma_device_type": 1 00:11:05.611 }, 00:11:05.611 { 00:11:05.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.611 "dma_device_type": 2 00:11:05.611 }, 00:11:05.611 { 00:11:05.611 "dma_device_id": "system", 00:11:05.611 "dma_device_type": 1 00:11:05.611 }, 00:11:05.611 { 00:11:05.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.611 "dma_device_type": 2 00:11:05.611 } 00:11:05.611 ], 00:11:05.611 "driver_specific": { 00:11:05.611 "raid": { 00:11:05.611 "uuid": "de754a4c-e512-4aa4-88bd-7749a7e74fec", 00:11:05.611 "strip_size_kb": 64, 00:11:05.611 "state": "online", 00:11:05.611 "raid_level": "concat", 00:11:05.611 "superblock": false, 00:11:05.611 "num_base_bdevs": 4, 00:11:05.611 "num_base_bdevs_discovered": 4, 00:11:05.611 "num_base_bdevs_operational": 4, 00:11:05.611 "base_bdevs_list": [ 00:11:05.611 { 00:11:05.611 "name": "NewBaseBdev", 00:11:05.611 "uuid": "2681bc39-13ec-4e41-9fad-2a9ae7f2ab44", 00:11:05.611 "is_configured": true, 00:11:05.611 "data_offset": 0, 00:11:05.611 "data_size": 65536 00:11:05.611 }, 00:11:05.611 { 00:11:05.611 "name": "BaseBdev2", 00:11:05.611 "uuid": "2f1872f3-44d2-4730-86ec-9cc275c3af47", 00:11:05.611 "is_configured": true, 00:11:05.611 "data_offset": 0, 00:11:05.611 "data_size": 65536 00:11:05.611 }, 00:11:05.611 { 00:11:05.611 "name": "BaseBdev3", 00:11:05.611 "uuid": "117ca057-c815-4130-aa87-fde134d4b142", 00:11:05.611 
"is_configured": true, 00:11:05.611 "data_offset": 0, 00:11:05.611 "data_size": 65536 00:11:05.611 }, 00:11:05.611 { 00:11:05.611 "name": "BaseBdev4", 00:11:05.611 "uuid": "71d7be7c-512d-49a4-bf92-48ee589c74cd", 00:11:05.611 "is_configured": true, 00:11:05.611 "data_offset": 0, 00:11:05.611 "data_size": 65536 00:11:05.611 } 00:11:05.611 ] 00:11:05.611 } 00:11:05.611 } 00:11:05.611 }' 00:11:05.611 17:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:05.611 17:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:05.611 BaseBdev2 00:11:05.611 BaseBdev3 00:11:05.611 BaseBdev4' 00:11:05.611 17:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.611 17:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:05.611 17:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.611 17:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.611 17:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:05.611 17:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.611 17:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.611 17:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.611 17:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.611 17:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.611 17:27:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.611 17:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:05.611 17:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.611 17:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.611 17:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.611 17:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.611 17:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.611 17:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.611 17:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.611 17:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:05.611 17:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.611 17:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.611 17:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.611 17:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.611 17:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.611 17:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.611 17:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.611 17:27:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:05.611 17:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.611 17:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.611 17:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.871 17:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.871 17:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.871 17:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.871 17:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:05.871 17:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.871 17:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.871 [2024-12-07 17:27:39.035247] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:05.871 [2024-12-07 17:27:39.035363] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:05.871 [2024-12-07 17:27:39.035480] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:05.871 [2024-12-07 17:27:39.035584] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:05.871 [2024-12-07 17:27:39.035630] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:05.871 17:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.871 17:27:39 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 71300 00:11:05.871 17:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 71300 ']' 00:11:05.871 17:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71300 00:11:05.871 17:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:05.871 17:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:05.871 17:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71300 00:11:05.871 killing process with pid 71300 00:11:05.871 17:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:05.871 17:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:05.871 17:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71300' 00:11:05.871 17:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71300 00:11:05.871 [2024-12-07 17:27:39.083775] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:05.871 17:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71300 00:11:06.439 [2024-12-07 17:27:39.513324] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:07.816 ************************************ 00:11:07.816 END TEST raid_state_function_test 00:11:07.816 ************************************ 00:11:07.816 17:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:07.816 00:11:07.816 real 0m11.677s 00:11:07.816 user 0m18.195s 00:11:07.816 sys 0m2.233s 00:11:07.816 17:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:07.816 17:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:11:07.816 17:27:40 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:11:07.816 17:27:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:07.816 17:27:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:07.816 17:27:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:07.816 ************************************ 00:11:07.816 START TEST raid_state_function_test_sb 00:11:07.816 ************************************ 00:11:07.816 17:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:11:07.816 17:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:07.816 17:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:07.816 17:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:07.816 17:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:07.816 17:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:07.816 17:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:07.816 17:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:07.816 17:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:07.816 17:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:07.816 17:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:07.816 17:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:07.816 17:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:07.816 
17:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:07.816 17:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:07.816 17:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:07.816 17:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:07.816 17:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:07.816 17:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:07.816 17:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:07.816 17:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:07.816 17:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:07.816 17:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:07.816 17:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:07.816 17:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:07.816 17:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:07.816 17:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:07.816 17:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:07.816 17:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:07.816 17:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:07.816 17:27:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@229 -- # raid_pid=71969 00:11:07.816 17:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71969' 00:11:07.816 Process raid pid: 71969 00:11:07.816 17:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 71969 00:11:07.816 17:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 71969 ']' 00:11:07.816 17:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:07.816 17:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:07.816 17:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:07.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:07.816 17:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:07.816 17:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:07.816 17:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.816 [2024-12-07 17:27:40.926586] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:11:07.816 [2024-12-07 17:27:40.926795] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:07.816 [2024-12-07 17:27:41.080594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:08.075 [2024-12-07 17:27:41.221671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.334 [2024-12-07 17:27:41.457984] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:08.334 [2024-12-07 17:27:41.458147] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:08.593 17:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:08.593 17:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:08.593 17:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:08.593 17:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.593 17:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.593 [2024-12-07 17:27:41.778577] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:08.593 [2024-12-07 17:27:41.778646] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:08.593 [2024-12-07 17:27:41.778657] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:08.593 [2024-12-07 17:27:41.778667] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:08.593 [2024-12-07 17:27:41.778680] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:11:08.593 [2024-12-07 17:27:41.778691] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:08.593 [2024-12-07 17:27:41.778697] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:08.593 [2024-12-07 17:27:41.778708] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:08.593 17:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.593 17:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:08.593 17:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:08.593 17:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:08.593 17:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:08.593 17:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:08.593 17:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:08.593 17:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.593 17:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.593 17:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.593 17:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.593 17:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.593 17:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.593 
17:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.593 17:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.593 17:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.593 17:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.593 "name": "Existed_Raid", 00:11:08.593 "uuid": "71211a0a-0d3d-4980-ae80-feab968cf4c7", 00:11:08.593 "strip_size_kb": 64, 00:11:08.593 "state": "configuring", 00:11:08.593 "raid_level": "concat", 00:11:08.593 "superblock": true, 00:11:08.593 "num_base_bdevs": 4, 00:11:08.593 "num_base_bdevs_discovered": 0, 00:11:08.593 "num_base_bdevs_operational": 4, 00:11:08.593 "base_bdevs_list": [ 00:11:08.593 { 00:11:08.593 "name": "BaseBdev1", 00:11:08.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.593 "is_configured": false, 00:11:08.593 "data_offset": 0, 00:11:08.593 "data_size": 0 00:11:08.593 }, 00:11:08.593 { 00:11:08.593 "name": "BaseBdev2", 00:11:08.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.593 "is_configured": false, 00:11:08.593 "data_offset": 0, 00:11:08.593 "data_size": 0 00:11:08.593 }, 00:11:08.593 { 00:11:08.593 "name": "BaseBdev3", 00:11:08.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.593 "is_configured": false, 00:11:08.593 "data_offset": 0, 00:11:08.593 "data_size": 0 00:11:08.593 }, 00:11:08.593 { 00:11:08.593 "name": "BaseBdev4", 00:11:08.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.593 "is_configured": false, 00:11:08.593 "data_offset": 0, 00:11:08.593 "data_size": 0 00:11:08.593 } 00:11:08.593 ] 00:11:08.593 }' 00:11:08.593 17:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.593 17:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.853 17:27:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:08.853 17:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.853 17:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.853 [2024-12-07 17:27:42.225757] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:08.853 [2024-12-07 17:27:42.225882] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:08.853 17:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.853 17:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:08.853 17:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.853 17:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.113 [2024-12-07 17:27:42.233750] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:09.113 [2024-12-07 17:27:42.233834] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:09.113 [2024-12-07 17:27:42.233866] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:09.113 [2024-12-07 17:27:42.233892] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:09.113 [2024-12-07 17:27:42.233924] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:09.113 [2024-12-07 17:27:42.233957] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:09.113 [2024-12-07 17:27:42.233989] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:11:09.113 [2024-12-07 17:27:42.234013] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:09.113 17:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.113 17:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:09.113 17:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.113 17:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.113 [2024-12-07 17:27:42.284560] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:09.113 BaseBdev1 00:11:09.113 17:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.113 17:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:09.113 17:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:09.113 17:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:09.113 17:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:09.113 17:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:09.113 17:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:09.113 17:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:09.113 17:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.113 17:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.113 17:27:42 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.113 17:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:09.113 17:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.113 17:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.113 [ 00:11:09.113 { 00:11:09.113 "name": "BaseBdev1", 00:11:09.113 "aliases": [ 00:11:09.113 "9a89e4cb-76f6-4816-9c44-d79160b7f2a5" 00:11:09.113 ], 00:11:09.113 "product_name": "Malloc disk", 00:11:09.113 "block_size": 512, 00:11:09.113 "num_blocks": 65536, 00:11:09.113 "uuid": "9a89e4cb-76f6-4816-9c44-d79160b7f2a5", 00:11:09.113 "assigned_rate_limits": { 00:11:09.113 "rw_ios_per_sec": 0, 00:11:09.113 "rw_mbytes_per_sec": 0, 00:11:09.113 "r_mbytes_per_sec": 0, 00:11:09.113 "w_mbytes_per_sec": 0 00:11:09.113 }, 00:11:09.113 "claimed": true, 00:11:09.113 "claim_type": "exclusive_write", 00:11:09.113 "zoned": false, 00:11:09.113 "supported_io_types": { 00:11:09.113 "read": true, 00:11:09.113 "write": true, 00:11:09.113 "unmap": true, 00:11:09.113 "flush": true, 00:11:09.113 "reset": true, 00:11:09.113 "nvme_admin": false, 00:11:09.113 "nvme_io": false, 00:11:09.113 "nvme_io_md": false, 00:11:09.113 "write_zeroes": true, 00:11:09.113 "zcopy": true, 00:11:09.113 "get_zone_info": false, 00:11:09.113 "zone_management": false, 00:11:09.113 "zone_append": false, 00:11:09.113 "compare": false, 00:11:09.113 "compare_and_write": false, 00:11:09.113 "abort": true, 00:11:09.113 "seek_hole": false, 00:11:09.113 "seek_data": false, 00:11:09.113 "copy": true, 00:11:09.113 "nvme_iov_md": false 00:11:09.113 }, 00:11:09.113 "memory_domains": [ 00:11:09.113 { 00:11:09.113 "dma_device_id": "system", 00:11:09.113 "dma_device_type": 1 00:11:09.113 }, 00:11:09.113 { 00:11:09.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.113 "dma_device_type": 2 00:11:09.113 } 
00:11:09.113 ], 00:11:09.113 "driver_specific": {} 00:11:09.113 } 00:11:09.113 ] 00:11:09.113 17:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.113 17:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:09.113 17:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:09.113 17:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:09.113 17:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:09.113 17:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:09.113 17:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:09.113 17:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:09.113 17:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.113 17:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.113 17:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.113 17:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.113 17:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.113 17:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.113 17:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.113 17:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.113 17:27:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.113 17:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.113 "name": "Existed_Raid", 00:11:09.113 "uuid": "c78570e3-70c7-4a89-862d-a30ea08d3b2f", 00:11:09.113 "strip_size_kb": 64, 00:11:09.113 "state": "configuring", 00:11:09.113 "raid_level": "concat", 00:11:09.113 "superblock": true, 00:11:09.113 "num_base_bdevs": 4, 00:11:09.113 "num_base_bdevs_discovered": 1, 00:11:09.113 "num_base_bdevs_operational": 4, 00:11:09.113 "base_bdevs_list": [ 00:11:09.113 { 00:11:09.113 "name": "BaseBdev1", 00:11:09.113 "uuid": "9a89e4cb-76f6-4816-9c44-d79160b7f2a5", 00:11:09.113 "is_configured": true, 00:11:09.113 "data_offset": 2048, 00:11:09.113 "data_size": 63488 00:11:09.113 }, 00:11:09.113 { 00:11:09.113 "name": "BaseBdev2", 00:11:09.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.113 "is_configured": false, 00:11:09.113 "data_offset": 0, 00:11:09.113 "data_size": 0 00:11:09.113 }, 00:11:09.113 { 00:11:09.113 "name": "BaseBdev3", 00:11:09.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.113 "is_configured": false, 00:11:09.113 "data_offset": 0, 00:11:09.113 "data_size": 0 00:11:09.113 }, 00:11:09.113 { 00:11:09.113 "name": "BaseBdev4", 00:11:09.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.113 "is_configured": false, 00:11:09.113 "data_offset": 0, 00:11:09.113 "data_size": 0 00:11:09.113 } 00:11:09.113 ] 00:11:09.113 }' 00:11:09.113 17:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.113 17:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.373 17:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:09.373 17:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.373 17:27:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.373 [2024-12-07 17:27:42.743897] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:09.373 [2024-12-07 17:27:42.743989] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:09.373 17:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.373 17:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:09.373 17:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.373 17:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.373 [2024-12-07 17:27:42.751906] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:09.632 [2024-12-07 17:27:42.754118] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:09.633 [2024-12-07 17:27:42.754162] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:09.633 [2024-12-07 17:27:42.754172] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:09.633 [2024-12-07 17:27:42.754184] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:09.633 [2024-12-07 17:27:42.754193] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:09.633 [2024-12-07 17:27:42.754202] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:09.633 17:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.633 17:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:11:09.633 17:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:09.633 17:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:09.633 17:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:09.633 17:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:09.633 17:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:09.633 17:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:09.633 17:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:09.633 17:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.633 17:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.633 17:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.633 17:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.633 17:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.633 17:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.633 17:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.633 17:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.633 17:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.633 17:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:11:09.633 "name": "Existed_Raid", 00:11:09.633 "uuid": "21312718-a57b-4a15-983d-699986dd4a0d", 00:11:09.633 "strip_size_kb": 64, 00:11:09.633 "state": "configuring", 00:11:09.633 "raid_level": "concat", 00:11:09.633 "superblock": true, 00:11:09.633 "num_base_bdevs": 4, 00:11:09.633 "num_base_bdevs_discovered": 1, 00:11:09.633 "num_base_bdevs_operational": 4, 00:11:09.633 "base_bdevs_list": [ 00:11:09.633 { 00:11:09.633 "name": "BaseBdev1", 00:11:09.633 "uuid": "9a89e4cb-76f6-4816-9c44-d79160b7f2a5", 00:11:09.633 "is_configured": true, 00:11:09.633 "data_offset": 2048, 00:11:09.633 "data_size": 63488 00:11:09.633 }, 00:11:09.633 { 00:11:09.633 "name": "BaseBdev2", 00:11:09.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.633 "is_configured": false, 00:11:09.633 "data_offset": 0, 00:11:09.633 "data_size": 0 00:11:09.633 }, 00:11:09.633 { 00:11:09.633 "name": "BaseBdev3", 00:11:09.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.633 "is_configured": false, 00:11:09.633 "data_offset": 0, 00:11:09.633 "data_size": 0 00:11:09.633 }, 00:11:09.633 { 00:11:09.633 "name": "BaseBdev4", 00:11:09.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.633 "is_configured": false, 00:11:09.633 "data_offset": 0, 00:11:09.633 "data_size": 0 00:11:09.633 } 00:11:09.633 ] 00:11:09.633 }' 00:11:09.633 17:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.633 17:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.893 17:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:09.893 17:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.893 17:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.893 [2024-12-07 17:27:43.217391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:11:09.893 BaseBdev2 00:11:09.893 17:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.893 17:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:09.893 17:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:09.893 17:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:09.893 17:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:09.893 17:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:09.893 17:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:09.893 17:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:09.893 17:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.893 17:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.893 17:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.893 17:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:09.893 17:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.893 17:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.893 [ 00:11:09.893 { 00:11:09.893 "name": "BaseBdev2", 00:11:09.893 "aliases": [ 00:11:09.893 "730baf3d-9f1e-4501-9e0f-549a828e34db" 00:11:09.893 ], 00:11:09.893 "product_name": "Malloc disk", 00:11:09.893 "block_size": 512, 00:11:09.893 "num_blocks": 65536, 00:11:09.893 "uuid": "730baf3d-9f1e-4501-9e0f-549a828e34db", 
00:11:09.893 "assigned_rate_limits": { 00:11:09.893 "rw_ios_per_sec": 0, 00:11:09.893 "rw_mbytes_per_sec": 0, 00:11:09.893 "r_mbytes_per_sec": 0, 00:11:09.893 "w_mbytes_per_sec": 0 00:11:09.893 }, 00:11:09.893 "claimed": true, 00:11:09.893 "claim_type": "exclusive_write", 00:11:09.893 "zoned": false, 00:11:09.893 "supported_io_types": { 00:11:09.893 "read": true, 00:11:09.893 "write": true, 00:11:09.893 "unmap": true, 00:11:09.893 "flush": true, 00:11:09.893 "reset": true, 00:11:09.893 "nvme_admin": false, 00:11:09.893 "nvme_io": false, 00:11:09.893 "nvme_io_md": false, 00:11:09.893 "write_zeroes": true, 00:11:09.893 "zcopy": true, 00:11:09.893 "get_zone_info": false, 00:11:09.893 "zone_management": false, 00:11:09.893 "zone_append": false, 00:11:09.893 "compare": false, 00:11:09.893 "compare_and_write": false, 00:11:09.893 "abort": true, 00:11:09.893 "seek_hole": false, 00:11:09.893 "seek_data": false, 00:11:09.893 "copy": true, 00:11:09.893 "nvme_iov_md": false 00:11:09.893 }, 00:11:09.893 "memory_domains": [ 00:11:09.893 { 00:11:09.893 "dma_device_id": "system", 00:11:09.893 "dma_device_type": 1 00:11:09.893 }, 00:11:09.893 { 00:11:09.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.893 "dma_device_type": 2 00:11:09.893 } 00:11:09.893 ], 00:11:09.893 "driver_specific": {} 00:11:09.893 } 00:11:09.893 ] 00:11:09.893 17:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.893 17:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:09.893 17:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:09.893 17:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:09.893 17:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:09.893 17:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:11:09.893 17:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:09.893 17:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:09.893 17:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:09.893 17:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:09.893 17:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.893 17:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.893 17:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.893 17:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.893 17:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.893 17:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.893 17:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.893 17:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.893 17:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.153 17:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.153 "name": "Existed_Raid", 00:11:10.153 "uuid": "21312718-a57b-4a15-983d-699986dd4a0d", 00:11:10.153 "strip_size_kb": 64, 00:11:10.153 "state": "configuring", 00:11:10.153 "raid_level": "concat", 00:11:10.153 "superblock": true, 00:11:10.153 "num_base_bdevs": 4, 00:11:10.153 "num_base_bdevs_discovered": 2, 00:11:10.153 
"num_base_bdevs_operational": 4, 00:11:10.153 "base_bdevs_list": [ 00:11:10.153 { 00:11:10.153 "name": "BaseBdev1", 00:11:10.153 "uuid": "9a89e4cb-76f6-4816-9c44-d79160b7f2a5", 00:11:10.153 "is_configured": true, 00:11:10.153 "data_offset": 2048, 00:11:10.153 "data_size": 63488 00:11:10.153 }, 00:11:10.153 { 00:11:10.153 "name": "BaseBdev2", 00:11:10.153 "uuid": "730baf3d-9f1e-4501-9e0f-549a828e34db", 00:11:10.153 "is_configured": true, 00:11:10.153 "data_offset": 2048, 00:11:10.153 "data_size": 63488 00:11:10.153 }, 00:11:10.153 { 00:11:10.153 "name": "BaseBdev3", 00:11:10.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.153 "is_configured": false, 00:11:10.153 "data_offset": 0, 00:11:10.153 "data_size": 0 00:11:10.153 }, 00:11:10.153 { 00:11:10.153 "name": "BaseBdev4", 00:11:10.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.153 "is_configured": false, 00:11:10.153 "data_offset": 0, 00:11:10.153 "data_size": 0 00:11:10.153 } 00:11:10.153 ] 00:11:10.153 }' 00:11:10.153 17:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.153 17:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.412 17:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:10.412 17:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.412 17:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.412 [2024-12-07 17:27:43.678251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:10.412 BaseBdev3 00:11:10.412 17:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.412 17:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:10.412 17:27:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:10.412 17:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:10.412 17:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:10.412 17:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:10.412 17:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:10.412 17:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:10.412 17:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.412 17:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.412 17:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.412 17:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:10.412 17:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.412 17:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.412 [ 00:11:10.412 { 00:11:10.412 "name": "BaseBdev3", 00:11:10.412 "aliases": [ 00:11:10.412 "241292f3-d44d-42af-ae82-45d5f53368b6" 00:11:10.412 ], 00:11:10.412 "product_name": "Malloc disk", 00:11:10.412 "block_size": 512, 00:11:10.412 "num_blocks": 65536, 00:11:10.412 "uuid": "241292f3-d44d-42af-ae82-45d5f53368b6", 00:11:10.412 "assigned_rate_limits": { 00:11:10.412 "rw_ios_per_sec": 0, 00:11:10.412 "rw_mbytes_per_sec": 0, 00:11:10.412 "r_mbytes_per_sec": 0, 00:11:10.412 "w_mbytes_per_sec": 0 00:11:10.412 }, 00:11:10.412 "claimed": true, 00:11:10.412 "claim_type": "exclusive_write", 00:11:10.412 "zoned": false, 00:11:10.412 "supported_io_types": { 
00:11:10.412 "read": true, 00:11:10.412 "write": true, 00:11:10.412 "unmap": true, 00:11:10.412 "flush": true, 00:11:10.412 "reset": true, 00:11:10.412 "nvme_admin": false, 00:11:10.412 "nvme_io": false, 00:11:10.412 "nvme_io_md": false, 00:11:10.412 "write_zeroes": true, 00:11:10.412 "zcopy": true, 00:11:10.412 "get_zone_info": false, 00:11:10.412 "zone_management": false, 00:11:10.412 "zone_append": false, 00:11:10.412 "compare": false, 00:11:10.412 "compare_and_write": false, 00:11:10.412 "abort": true, 00:11:10.412 "seek_hole": false, 00:11:10.412 "seek_data": false, 00:11:10.412 "copy": true, 00:11:10.412 "nvme_iov_md": false 00:11:10.412 }, 00:11:10.413 "memory_domains": [ 00:11:10.413 { 00:11:10.413 "dma_device_id": "system", 00:11:10.413 "dma_device_type": 1 00:11:10.413 }, 00:11:10.413 { 00:11:10.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.413 "dma_device_type": 2 00:11:10.413 } 00:11:10.413 ], 00:11:10.413 "driver_specific": {} 00:11:10.413 } 00:11:10.413 ] 00:11:10.413 17:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.413 17:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:10.413 17:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:10.413 17:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:10.413 17:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:10.413 17:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.413 17:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:10.413 17:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:10.413 17:27:43 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:10.413 17:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:10.413 17:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.413 17:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.413 17:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.413 17:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.413 17:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.413 17:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.413 17:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.413 17:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.413 17:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.413 17:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.413 "name": "Existed_Raid", 00:11:10.413 "uuid": "21312718-a57b-4a15-983d-699986dd4a0d", 00:11:10.413 "strip_size_kb": 64, 00:11:10.413 "state": "configuring", 00:11:10.413 "raid_level": "concat", 00:11:10.413 "superblock": true, 00:11:10.413 "num_base_bdevs": 4, 00:11:10.413 "num_base_bdevs_discovered": 3, 00:11:10.413 "num_base_bdevs_operational": 4, 00:11:10.413 "base_bdevs_list": [ 00:11:10.413 { 00:11:10.413 "name": "BaseBdev1", 00:11:10.413 "uuid": "9a89e4cb-76f6-4816-9c44-d79160b7f2a5", 00:11:10.413 "is_configured": true, 00:11:10.413 "data_offset": 2048, 00:11:10.413 "data_size": 63488 00:11:10.413 }, 00:11:10.413 { 00:11:10.413 "name": "BaseBdev2", 00:11:10.413 
"uuid": "730baf3d-9f1e-4501-9e0f-549a828e34db", 00:11:10.413 "is_configured": true, 00:11:10.413 "data_offset": 2048, 00:11:10.413 "data_size": 63488 00:11:10.413 }, 00:11:10.413 { 00:11:10.413 "name": "BaseBdev3", 00:11:10.413 "uuid": "241292f3-d44d-42af-ae82-45d5f53368b6", 00:11:10.413 "is_configured": true, 00:11:10.413 "data_offset": 2048, 00:11:10.413 "data_size": 63488 00:11:10.413 }, 00:11:10.413 { 00:11:10.413 "name": "BaseBdev4", 00:11:10.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.413 "is_configured": false, 00:11:10.413 "data_offset": 0, 00:11:10.413 "data_size": 0 00:11:10.413 } 00:11:10.413 ] 00:11:10.413 }' 00:11:10.413 17:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.413 17:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.983 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:10.983 17:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.983 17:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.983 [2024-12-07 17:27:44.181895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:10.983 [2024-12-07 17:27:44.182244] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:10.983 [2024-12-07 17:27:44.182261] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:10.983 [2024-12-07 17:27:44.182550] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:10.983 BaseBdev4 00:11:10.983 [2024-12-07 17:27:44.182714] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:10.983 [2024-12-07 17:27:44.182727] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:11:10.983 [2024-12-07 17:27:44.182867] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:10.983 17:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.983 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:10.983 17:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:10.983 17:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:10.983 17:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:10.983 17:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:10.983 17:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:10.983 17:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:10.983 17:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.983 17:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.983 17:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.983 17:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:10.983 17:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.983 17:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.983 [ 00:11:10.983 { 00:11:10.983 "name": "BaseBdev4", 00:11:10.983 "aliases": [ 00:11:10.983 "f9c03c82-02a3-499c-9a25-2e1b41e9f231" 00:11:10.983 ], 00:11:10.983 "product_name": "Malloc disk", 00:11:10.983 "block_size": 512, 00:11:10.983 
"num_blocks": 65536, 00:11:10.983 "uuid": "f9c03c82-02a3-499c-9a25-2e1b41e9f231", 00:11:10.983 "assigned_rate_limits": { 00:11:10.983 "rw_ios_per_sec": 0, 00:11:10.983 "rw_mbytes_per_sec": 0, 00:11:10.983 "r_mbytes_per_sec": 0, 00:11:10.983 "w_mbytes_per_sec": 0 00:11:10.983 }, 00:11:10.983 "claimed": true, 00:11:10.983 "claim_type": "exclusive_write", 00:11:10.983 "zoned": false, 00:11:10.983 "supported_io_types": { 00:11:10.983 "read": true, 00:11:10.983 "write": true, 00:11:10.983 "unmap": true, 00:11:10.983 "flush": true, 00:11:10.983 "reset": true, 00:11:10.983 "nvme_admin": false, 00:11:10.983 "nvme_io": false, 00:11:10.983 "nvme_io_md": false, 00:11:10.983 "write_zeroes": true, 00:11:10.983 "zcopy": true, 00:11:10.983 "get_zone_info": false, 00:11:10.983 "zone_management": false, 00:11:10.983 "zone_append": false, 00:11:10.983 "compare": false, 00:11:10.983 "compare_and_write": false, 00:11:10.983 "abort": true, 00:11:10.983 "seek_hole": false, 00:11:10.983 "seek_data": false, 00:11:10.983 "copy": true, 00:11:10.983 "nvme_iov_md": false 00:11:10.983 }, 00:11:10.983 "memory_domains": [ 00:11:10.983 { 00:11:10.983 "dma_device_id": "system", 00:11:10.983 "dma_device_type": 1 00:11:10.983 }, 00:11:10.983 { 00:11:10.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.983 "dma_device_type": 2 00:11:10.983 } 00:11:10.983 ], 00:11:10.983 "driver_specific": {} 00:11:10.983 } 00:11:10.983 ] 00:11:10.983 17:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.983 17:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:10.983 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:10.983 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:10.983 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:11:10.983 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.983 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:10.983 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:10.983 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:10.983 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:10.983 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.983 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.983 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.983 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.983 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.983 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.983 17:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.983 17:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.983 17:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.983 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.983 "name": "Existed_Raid", 00:11:10.983 "uuid": "21312718-a57b-4a15-983d-699986dd4a0d", 00:11:10.983 "strip_size_kb": 64, 00:11:10.983 "state": "online", 00:11:10.983 "raid_level": "concat", 00:11:10.983 "superblock": true, 00:11:10.983 "num_base_bdevs": 4, 
00:11:10.983 "num_base_bdevs_discovered": 4, 00:11:10.983 "num_base_bdevs_operational": 4, 00:11:10.983 "base_bdevs_list": [ 00:11:10.983 { 00:11:10.983 "name": "BaseBdev1", 00:11:10.983 "uuid": "9a89e4cb-76f6-4816-9c44-d79160b7f2a5", 00:11:10.983 "is_configured": true, 00:11:10.983 "data_offset": 2048, 00:11:10.984 "data_size": 63488 00:11:10.984 }, 00:11:10.984 { 00:11:10.984 "name": "BaseBdev2", 00:11:10.984 "uuid": "730baf3d-9f1e-4501-9e0f-549a828e34db", 00:11:10.984 "is_configured": true, 00:11:10.984 "data_offset": 2048, 00:11:10.984 "data_size": 63488 00:11:10.984 }, 00:11:10.984 { 00:11:10.984 "name": "BaseBdev3", 00:11:10.984 "uuid": "241292f3-d44d-42af-ae82-45d5f53368b6", 00:11:10.984 "is_configured": true, 00:11:10.984 "data_offset": 2048, 00:11:10.984 "data_size": 63488 00:11:10.984 }, 00:11:10.984 { 00:11:10.984 "name": "BaseBdev4", 00:11:10.984 "uuid": "f9c03c82-02a3-499c-9a25-2e1b41e9f231", 00:11:10.984 "is_configured": true, 00:11:10.984 "data_offset": 2048, 00:11:10.984 "data_size": 63488 00:11:10.984 } 00:11:10.984 ] 00:11:10.984 }' 00:11:10.984 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.984 17:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.243 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:11.243 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:11.243 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:11.243 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:11.243 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:11.243 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:11.244 
17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:11.244 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:11.244 17:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.244 17:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.244 [2024-12-07 17:27:44.597532] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:11.244 17:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.510 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:11.510 "name": "Existed_Raid", 00:11:11.510 "aliases": [ 00:11:11.510 "21312718-a57b-4a15-983d-699986dd4a0d" 00:11:11.510 ], 00:11:11.510 "product_name": "Raid Volume", 00:11:11.510 "block_size": 512, 00:11:11.510 "num_blocks": 253952, 00:11:11.510 "uuid": "21312718-a57b-4a15-983d-699986dd4a0d", 00:11:11.510 "assigned_rate_limits": { 00:11:11.510 "rw_ios_per_sec": 0, 00:11:11.510 "rw_mbytes_per_sec": 0, 00:11:11.510 "r_mbytes_per_sec": 0, 00:11:11.510 "w_mbytes_per_sec": 0 00:11:11.510 }, 00:11:11.510 "claimed": false, 00:11:11.510 "zoned": false, 00:11:11.510 "supported_io_types": { 00:11:11.510 "read": true, 00:11:11.510 "write": true, 00:11:11.510 "unmap": true, 00:11:11.510 "flush": true, 00:11:11.510 "reset": true, 00:11:11.510 "nvme_admin": false, 00:11:11.510 "nvme_io": false, 00:11:11.510 "nvme_io_md": false, 00:11:11.510 "write_zeroes": true, 00:11:11.510 "zcopy": false, 00:11:11.510 "get_zone_info": false, 00:11:11.510 "zone_management": false, 00:11:11.510 "zone_append": false, 00:11:11.510 "compare": false, 00:11:11.510 "compare_and_write": false, 00:11:11.510 "abort": false, 00:11:11.510 "seek_hole": false, 00:11:11.510 "seek_data": false, 00:11:11.510 "copy": false, 00:11:11.510 
"nvme_iov_md": false 00:11:11.510 }, 00:11:11.510 "memory_domains": [ 00:11:11.510 { 00:11:11.510 "dma_device_id": "system", 00:11:11.510 "dma_device_type": 1 00:11:11.510 }, 00:11:11.510 { 00:11:11.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.510 "dma_device_type": 2 00:11:11.510 }, 00:11:11.510 { 00:11:11.510 "dma_device_id": "system", 00:11:11.510 "dma_device_type": 1 00:11:11.510 }, 00:11:11.510 { 00:11:11.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.510 "dma_device_type": 2 00:11:11.510 }, 00:11:11.510 { 00:11:11.510 "dma_device_id": "system", 00:11:11.510 "dma_device_type": 1 00:11:11.510 }, 00:11:11.510 { 00:11:11.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.510 "dma_device_type": 2 00:11:11.510 }, 00:11:11.510 { 00:11:11.510 "dma_device_id": "system", 00:11:11.510 "dma_device_type": 1 00:11:11.510 }, 00:11:11.510 { 00:11:11.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.510 "dma_device_type": 2 00:11:11.510 } 00:11:11.510 ], 00:11:11.510 "driver_specific": { 00:11:11.510 "raid": { 00:11:11.510 "uuid": "21312718-a57b-4a15-983d-699986dd4a0d", 00:11:11.510 "strip_size_kb": 64, 00:11:11.510 "state": "online", 00:11:11.510 "raid_level": "concat", 00:11:11.510 "superblock": true, 00:11:11.510 "num_base_bdevs": 4, 00:11:11.510 "num_base_bdevs_discovered": 4, 00:11:11.510 "num_base_bdevs_operational": 4, 00:11:11.510 "base_bdevs_list": [ 00:11:11.510 { 00:11:11.510 "name": "BaseBdev1", 00:11:11.510 "uuid": "9a89e4cb-76f6-4816-9c44-d79160b7f2a5", 00:11:11.510 "is_configured": true, 00:11:11.510 "data_offset": 2048, 00:11:11.510 "data_size": 63488 00:11:11.510 }, 00:11:11.510 { 00:11:11.510 "name": "BaseBdev2", 00:11:11.510 "uuid": "730baf3d-9f1e-4501-9e0f-549a828e34db", 00:11:11.510 "is_configured": true, 00:11:11.510 "data_offset": 2048, 00:11:11.510 "data_size": 63488 00:11:11.510 }, 00:11:11.510 { 00:11:11.510 "name": "BaseBdev3", 00:11:11.510 "uuid": "241292f3-d44d-42af-ae82-45d5f53368b6", 00:11:11.510 "is_configured": true, 
00:11:11.511 "data_offset": 2048, 00:11:11.511 "data_size": 63488 00:11:11.511 }, 00:11:11.511 { 00:11:11.511 "name": "BaseBdev4", 00:11:11.511 "uuid": "f9c03c82-02a3-499c-9a25-2e1b41e9f231", 00:11:11.511 "is_configured": true, 00:11:11.511 "data_offset": 2048, 00:11:11.511 "data_size": 63488 00:11:11.511 } 00:11:11.511 ] 00:11:11.511 } 00:11:11.511 } 00:11:11.511 }' 00:11:11.511 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:11.511 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:11.511 BaseBdev2 00:11:11.511 BaseBdev3 00:11:11.511 BaseBdev4' 00:11:11.511 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:11.511 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:11.511 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:11.511 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:11.511 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:11.511 17:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.511 17:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.511 17:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.511 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:11.511 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:11.511 17:27:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:11.511 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:11.511 17:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.511 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:11.511 17:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.511 17:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.511 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:11.511 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:11.511 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:11.511 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:11.511 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:11.511 17:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.511 17:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.511 17:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.511 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:11.511 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:11.511 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:11.511 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:11.511 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:11.511 17:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.511 17:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.511 17:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.511 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:11.511 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:11.511 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:11.511 17:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.511 17:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.511 [2024-12-07 17:27:44.884756] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:11.511 [2024-12-07 17:27:44.884789] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:11.511 [2024-12-07 17:27:44.884844] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:11.785 17:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.785 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:11.785 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:11.785 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:11:11.785 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:11.785 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:11.785 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:11.785 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:11.785 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:11.785 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:11.785 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:11.785 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:11.785 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.785 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.785 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.785 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.785 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.785 17:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.785 17:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.785 17:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.785 17:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:11.785 17:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.785 "name": "Existed_Raid", 00:11:11.785 "uuid": "21312718-a57b-4a15-983d-699986dd4a0d", 00:11:11.785 "strip_size_kb": 64, 00:11:11.785 "state": "offline", 00:11:11.785 "raid_level": "concat", 00:11:11.785 "superblock": true, 00:11:11.785 "num_base_bdevs": 4, 00:11:11.785 "num_base_bdevs_discovered": 3, 00:11:11.785 "num_base_bdevs_operational": 3, 00:11:11.785 "base_bdevs_list": [ 00:11:11.785 { 00:11:11.785 "name": null, 00:11:11.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.785 "is_configured": false, 00:11:11.785 "data_offset": 0, 00:11:11.785 "data_size": 63488 00:11:11.785 }, 00:11:11.785 { 00:11:11.785 "name": "BaseBdev2", 00:11:11.785 "uuid": "730baf3d-9f1e-4501-9e0f-549a828e34db", 00:11:11.785 "is_configured": true, 00:11:11.785 "data_offset": 2048, 00:11:11.785 "data_size": 63488 00:11:11.785 }, 00:11:11.785 { 00:11:11.785 "name": "BaseBdev3", 00:11:11.785 "uuid": "241292f3-d44d-42af-ae82-45d5f53368b6", 00:11:11.785 "is_configured": true, 00:11:11.785 "data_offset": 2048, 00:11:11.785 "data_size": 63488 00:11:11.785 }, 00:11:11.785 { 00:11:11.785 "name": "BaseBdev4", 00:11:11.785 "uuid": "f9c03c82-02a3-499c-9a25-2e1b41e9f231", 00:11:11.785 "is_configured": true, 00:11:11.785 "data_offset": 2048, 00:11:11.785 "data_size": 63488 00:11:11.785 } 00:11:11.785 ] 00:11:11.785 }' 00:11:11.785 17:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.785 17:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.044 17:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:12.044 17:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:12.044 17:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.044 
17:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.044 17:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.044 17:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:12.044 17:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.301 17:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:12.301 17:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:12.301 17:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:12.301 17:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.301 17:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.301 [2024-12-07 17:27:45.437282] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:12.301 17:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.301 17:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:12.301 17:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:12.301 17:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.301 17:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:12.301 17:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.301 17:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.301 17:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:12.301 17:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:12.301 17:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:12.301 17:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:12.301 17:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.301 17:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.301 [2024-12-07 17:27:45.593002] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:12.559 17:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.559 17:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:12.559 17:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:12.559 17:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.559 17:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.559 17:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.559 17:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:12.559 17:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.559 17:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:12.559 17:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:12.559 17:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:12.559 17:27:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.559 17:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.559 [2024-12-07 17:27:45.743228] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:12.559 [2024-12-07 17:27:45.743296] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:12.559 17:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.559 17:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:12.559 17:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:12.559 17:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.559 17:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.559 17:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:12.559 17:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.559 17:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.559 17:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:12.559 17:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:12.559 17:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:12.559 17:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:12.559 17:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:12.559 17:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:11:12.559 17:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.559 17:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.559 BaseBdev2 00:11:12.559 17:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.559 17:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:12.559 17:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:12.559 17:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:12.559 17:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:12.559 17:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:12.559 17:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:12.559 17:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:12.559 17:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.559 17:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.817 17:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.817 17:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:12.817 17:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.817 17:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.817 [ 00:11:12.817 { 00:11:12.817 "name": "BaseBdev2", 00:11:12.817 "aliases": [ 00:11:12.817 
"53d5505b-a5d9-4966-92bc-64b3c05eede6" 00:11:12.817 ], 00:11:12.817 "product_name": "Malloc disk", 00:11:12.817 "block_size": 512, 00:11:12.817 "num_blocks": 65536, 00:11:12.817 "uuid": "53d5505b-a5d9-4966-92bc-64b3c05eede6", 00:11:12.817 "assigned_rate_limits": { 00:11:12.817 "rw_ios_per_sec": 0, 00:11:12.817 "rw_mbytes_per_sec": 0, 00:11:12.817 "r_mbytes_per_sec": 0, 00:11:12.817 "w_mbytes_per_sec": 0 00:11:12.817 }, 00:11:12.817 "claimed": false, 00:11:12.817 "zoned": false, 00:11:12.817 "supported_io_types": { 00:11:12.817 "read": true, 00:11:12.817 "write": true, 00:11:12.817 "unmap": true, 00:11:12.817 "flush": true, 00:11:12.817 "reset": true, 00:11:12.817 "nvme_admin": false, 00:11:12.817 "nvme_io": false, 00:11:12.817 "nvme_io_md": false, 00:11:12.817 "write_zeroes": true, 00:11:12.817 "zcopy": true, 00:11:12.817 "get_zone_info": false, 00:11:12.817 "zone_management": false, 00:11:12.817 "zone_append": false, 00:11:12.817 "compare": false, 00:11:12.817 "compare_and_write": false, 00:11:12.817 "abort": true, 00:11:12.817 "seek_hole": false, 00:11:12.817 "seek_data": false, 00:11:12.817 "copy": true, 00:11:12.817 "nvme_iov_md": false 00:11:12.817 }, 00:11:12.817 "memory_domains": [ 00:11:12.817 { 00:11:12.817 "dma_device_id": "system", 00:11:12.817 "dma_device_type": 1 00:11:12.817 }, 00:11:12.817 { 00:11:12.817 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.817 "dma_device_type": 2 00:11:12.817 } 00:11:12.817 ], 00:11:12.817 "driver_specific": {} 00:11:12.817 } 00:11:12.817 ] 00:11:12.817 17:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.817 17:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:12.817 17:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:12.817 17:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:12.817 17:27:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:12.817 17:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.817 17:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.817 BaseBdev3 00:11:12.817 17:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.817 17:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:12.817 17:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:12.817 17:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:12.817 17:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:12.817 17:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:12.817 17:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:12.817 17:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:12.817 17:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.817 17:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.817 17:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.817 17:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:12.817 17:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.817 17:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.817 [ 00:11:12.817 { 
00:11:12.817 "name": "BaseBdev3", 00:11:12.817 "aliases": [ 00:11:12.817 "83b32c37-38eb-4fb1-9ab7-8eebc4cc490e" 00:11:12.817 ], 00:11:12.817 "product_name": "Malloc disk", 00:11:12.817 "block_size": 512, 00:11:12.817 "num_blocks": 65536, 00:11:12.817 "uuid": "83b32c37-38eb-4fb1-9ab7-8eebc4cc490e", 00:11:12.817 "assigned_rate_limits": { 00:11:12.817 "rw_ios_per_sec": 0, 00:11:12.817 "rw_mbytes_per_sec": 0, 00:11:12.817 "r_mbytes_per_sec": 0, 00:11:12.817 "w_mbytes_per_sec": 0 00:11:12.817 }, 00:11:12.817 "claimed": false, 00:11:12.817 "zoned": false, 00:11:12.817 "supported_io_types": { 00:11:12.817 "read": true, 00:11:12.817 "write": true, 00:11:12.817 "unmap": true, 00:11:12.817 "flush": true, 00:11:12.817 "reset": true, 00:11:12.817 "nvme_admin": false, 00:11:12.817 "nvme_io": false, 00:11:12.817 "nvme_io_md": false, 00:11:12.817 "write_zeroes": true, 00:11:12.817 "zcopy": true, 00:11:12.817 "get_zone_info": false, 00:11:12.817 "zone_management": false, 00:11:12.817 "zone_append": false, 00:11:12.817 "compare": false, 00:11:12.817 "compare_and_write": false, 00:11:12.817 "abort": true, 00:11:12.817 "seek_hole": false, 00:11:12.817 "seek_data": false, 00:11:12.817 "copy": true, 00:11:12.817 "nvme_iov_md": false 00:11:12.817 }, 00:11:12.817 "memory_domains": [ 00:11:12.817 { 00:11:12.817 "dma_device_id": "system", 00:11:12.817 "dma_device_type": 1 00:11:12.817 }, 00:11:12.817 { 00:11:12.817 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.817 "dma_device_type": 2 00:11:12.817 } 00:11:12.817 ], 00:11:12.817 "driver_specific": {} 00:11:12.817 } 00:11:12.817 ] 00:11:12.817 17:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.817 17:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:12.817 17:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:12.818 17:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:11:12.818 17:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:12.818 17:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.818 17:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.818 BaseBdev4 00:11:12.818 17:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.818 17:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:12.818 17:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:12.818 17:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:12.818 17:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:12.818 17:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:12.818 17:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:12.818 17:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:12.818 17:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.818 17:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.818 17:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.818 17:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:12.818 17:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.818 17:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:11:12.818 [ 00:11:12.818 { 00:11:12.818 "name": "BaseBdev4", 00:11:12.818 "aliases": [ 00:11:12.818 "70b69093-4ec8-4a3f-b356-74b9cd8c0bf7" 00:11:12.818 ], 00:11:12.818 "product_name": "Malloc disk", 00:11:12.818 "block_size": 512, 00:11:12.818 "num_blocks": 65536, 00:11:12.818 "uuid": "70b69093-4ec8-4a3f-b356-74b9cd8c0bf7", 00:11:12.818 "assigned_rate_limits": { 00:11:12.818 "rw_ios_per_sec": 0, 00:11:12.818 "rw_mbytes_per_sec": 0, 00:11:12.818 "r_mbytes_per_sec": 0, 00:11:12.818 "w_mbytes_per_sec": 0 00:11:12.818 }, 00:11:12.818 "claimed": false, 00:11:12.818 "zoned": false, 00:11:12.818 "supported_io_types": { 00:11:12.818 "read": true, 00:11:12.818 "write": true, 00:11:12.818 "unmap": true, 00:11:12.818 "flush": true, 00:11:12.818 "reset": true, 00:11:12.818 "nvme_admin": false, 00:11:12.818 "nvme_io": false, 00:11:12.818 "nvme_io_md": false, 00:11:12.818 "write_zeroes": true, 00:11:12.818 "zcopy": true, 00:11:12.818 "get_zone_info": false, 00:11:12.818 "zone_management": false, 00:11:12.818 "zone_append": false, 00:11:12.818 "compare": false, 00:11:12.818 "compare_and_write": false, 00:11:12.818 "abort": true, 00:11:12.818 "seek_hole": false, 00:11:12.818 "seek_data": false, 00:11:12.818 "copy": true, 00:11:12.818 "nvme_iov_md": false 00:11:12.818 }, 00:11:12.818 "memory_domains": [ 00:11:12.818 { 00:11:12.818 "dma_device_id": "system", 00:11:12.818 "dma_device_type": 1 00:11:12.818 }, 00:11:12.818 { 00:11:12.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.818 "dma_device_type": 2 00:11:12.818 } 00:11:12.818 ], 00:11:12.818 "driver_specific": {} 00:11:12.818 } 00:11:12.818 ] 00:11:12.818 17:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.818 17:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:12.818 17:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:12.818 17:27:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:12.818 17:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:12.818 17:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.818 17:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.818 [2024-12-07 17:27:46.101498] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:12.818 [2024-12-07 17:27:46.101633] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:12.818 [2024-12-07 17:27:46.101663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:12.818 [2024-12-07 17:27:46.103757] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:12.818 [2024-12-07 17:27:46.103811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:12.818 17:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.818 17:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:12.818 17:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:12.818 17:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:12.818 17:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:12.818 17:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:12.818 17:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:12.818 17:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.818 17:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.818 17:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.818 17:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.818 17:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.818 17:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.818 17:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.818 17:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.818 17:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.818 17:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.818 "name": "Existed_Raid", 00:11:12.818 "uuid": "41612dd5-4ea7-4fcc-95b7-f2dd625319ca", 00:11:12.818 "strip_size_kb": 64, 00:11:12.818 "state": "configuring", 00:11:12.818 "raid_level": "concat", 00:11:12.818 "superblock": true, 00:11:12.818 "num_base_bdevs": 4, 00:11:12.818 "num_base_bdevs_discovered": 3, 00:11:12.818 "num_base_bdevs_operational": 4, 00:11:12.818 "base_bdevs_list": [ 00:11:12.818 { 00:11:12.818 "name": "BaseBdev1", 00:11:12.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.818 "is_configured": false, 00:11:12.818 "data_offset": 0, 00:11:12.818 "data_size": 0 00:11:12.818 }, 00:11:12.818 { 00:11:12.818 "name": "BaseBdev2", 00:11:12.818 "uuid": "53d5505b-a5d9-4966-92bc-64b3c05eede6", 00:11:12.818 "is_configured": true, 00:11:12.818 "data_offset": 2048, 00:11:12.818 "data_size": 63488 
00:11:12.818 }, 00:11:12.818 { 00:11:12.818 "name": "BaseBdev3", 00:11:12.818 "uuid": "83b32c37-38eb-4fb1-9ab7-8eebc4cc490e", 00:11:12.818 "is_configured": true, 00:11:12.818 "data_offset": 2048, 00:11:12.818 "data_size": 63488 00:11:12.818 }, 00:11:12.818 { 00:11:12.818 "name": "BaseBdev4", 00:11:12.818 "uuid": "70b69093-4ec8-4a3f-b356-74b9cd8c0bf7", 00:11:12.818 "is_configured": true, 00:11:12.818 "data_offset": 2048, 00:11:12.818 "data_size": 63488 00:11:12.818 } 00:11:12.818 ] 00:11:12.818 }' 00:11:12.818 17:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.818 17:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.384 17:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:13.384 17:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.384 17:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.384 [2024-12-07 17:27:46.532706] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:13.384 17:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.384 17:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:13.384 17:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:13.384 17:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:13.384 17:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:13.384 17:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:13.384 17:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:13.384 17:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.384 17:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.384 17:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.384 17:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.384 17:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.384 17:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.384 17:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.384 17:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.384 17:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.384 17:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.384 "name": "Existed_Raid", 00:11:13.384 "uuid": "41612dd5-4ea7-4fcc-95b7-f2dd625319ca", 00:11:13.384 "strip_size_kb": 64, 00:11:13.384 "state": "configuring", 00:11:13.384 "raid_level": "concat", 00:11:13.384 "superblock": true, 00:11:13.384 "num_base_bdevs": 4, 00:11:13.384 "num_base_bdevs_discovered": 2, 00:11:13.384 "num_base_bdevs_operational": 4, 00:11:13.384 "base_bdevs_list": [ 00:11:13.384 { 00:11:13.384 "name": "BaseBdev1", 00:11:13.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.384 "is_configured": false, 00:11:13.384 "data_offset": 0, 00:11:13.384 "data_size": 0 00:11:13.384 }, 00:11:13.384 { 00:11:13.384 "name": null, 00:11:13.384 "uuid": "53d5505b-a5d9-4966-92bc-64b3c05eede6", 00:11:13.384 "is_configured": false, 00:11:13.384 "data_offset": 0, 00:11:13.384 "data_size": 63488 
00:11:13.384 }, 00:11:13.384 { 00:11:13.384 "name": "BaseBdev3", 00:11:13.384 "uuid": "83b32c37-38eb-4fb1-9ab7-8eebc4cc490e", 00:11:13.384 "is_configured": true, 00:11:13.384 "data_offset": 2048, 00:11:13.384 "data_size": 63488 00:11:13.384 }, 00:11:13.384 { 00:11:13.384 "name": "BaseBdev4", 00:11:13.384 "uuid": "70b69093-4ec8-4a3f-b356-74b9cd8c0bf7", 00:11:13.384 "is_configured": true, 00:11:13.384 "data_offset": 2048, 00:11:13.384 "data_size": 63488 00:11:13.384 } 00:11:13.384 ] 00:11:13.384 }' 00:11:13.384 17:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.384 17:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.642 17:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:13.642 17:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.642 17:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.642 17:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.642 17:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.642 17:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:13.642 17:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:13.642 17:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.642 17:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.900 BaseBdev1 00:11:13.900 [2024-12-07 17:27:47.061182] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:13.900 17:27:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.900 17:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:13.900 17:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:13.900 17:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:13.900 17:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:13.900 17:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:13.900 17:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:13.900 17:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:13.900 17:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.900 17:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.900 17:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.900 17:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:13.900 17:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.900 17:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.900 [ 00:11:13.900 { 00:11:13.900 "name": "BaseBdev1", 00:11:13.900 "aliases": [ 00:11:13.900 "4f4822e6-e33d-4618-bee7-3abc0a651407" 00:11:13.900 ], 00:11:13.900 "product_name": "Malloc disk", 00:11:13.900 "block_size": 512, 00:11:13.900 "num_blocks": 65536, 00:11:13.900 "uuid": "4f4822e6-e33d-4618-bee7-3abc0a651407", 00:11:13.900 "assigned_rate_limits": { 00:11:13.900 "rw_ios_per_sec": 0, 00:11:13.900 "rw_mbytes_per_sec": 0, 
00:11:13.900 "r_mbytes_per_sec": 0, 00:11:13.900 "w_mbytes_per_sec": 0 00:11:13.900 }, 00:11:13.900 "claimed": true, 00:11:13.900 "claim_type": "exclusive_write", 00:11:13.900 "zoned": false, 00:11:13.900 "supported_io_types": { 00:11:13.900 "read": true, 00:11:13.900 "write": true, 00:11:13.900 "unmap": true, 00:11:13.900 "flush": true, 00:11:13.900 "reset": true, 00:11:13.900 "nvme_admin": false, 00:11:13.900 "nvme_io": false, 00:11:13.900 "nvme_io_md": false, 00:11:13.900 "write_zeroes": true, 00:11:13.900 "zcopy": true, 00:11:13.900 "get_zone_info": false, 00:11:13.900 "zone_management": false, 00:11:13.900 "zone_append": false, 00:11:13.900 "compare": false, 00:11:13.900 "compare_and_write": false, 00:11:13.900 "abort": true, 00:11:13.900 "seek_hole": false, 00:11:13.900 "seek_data": false, 00:11:13.900 "copy": true, 00:11:13.900 "nvme_iov_md": false 00:11:13.900 }, 00:11:13.900 "memory_domains": [ 00:11:13.900 { 00:11:13.900 "dma_device_id": "system", 00:11:13.900 "dma_device_type": 1 00:11:13.900 }, 00:11:13.900 { 00:11:13.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.900 "dma_device_type": 2 00:11:13.900 } 00:11:13.900 ], 00:11:13.900 "driver_specific": {} 00:11:13.900 } 00:11:13.900 ] 00:11:13.900 17:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.900 17:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:13.900 17:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:13.900 17:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:13.900 17:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:13.900 17:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:13.900 17:27:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:13.900 17:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:13.900 17:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.900 17:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.900 17:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.900 17:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.900 17:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.900 17:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.900 17:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.900 17:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.900 17:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.900 17:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.900 "name": "Existed_Raid", 00:11:13.900 "uuid": "41612dd5-4ea7-4fcc-95b7-f2dd625319ca", 00:11:13.900 "strip_size_kb": 64, 00:11:13.900 "state": "configuring", 00:11:13.900 "raid_level": "concat", 00:11:13.900 "superblock": true, 00:11:13.900 "num_base_bdevs": 4, 00:11:13.900 "num_base_bdevs_discovered": 3, 00:11:13.900 "num_base_bdevs_operational": 4, 00:11:13.900 "base_bdevs_list": [ 00:11:13.900 { 00:11:13.900 "name": "BaseBdev1", 00:11:13.900 "uuid": "4f4822e6-e33d-4618-bee7-3abc0a651407", 00:11:13.900 "is_configured": true, 00:11:13.900 "data_offset": 2048, 00:11:13.900 "data_size": 63488 00:11:13.900 }, 00:11:13.900 { 
00:11:13.900 "name": null, 00:11:13.900 "uuid": "53d5505b-a5d9-4966-92bc-64b3c05eede6", 00:11:13.900 "is_configured": false, 00:11:13.900 "data_offset": 0, 00:11:13.900 "data_size": 63488 00:11:13.900 }, 00:11:13.900 { 00:11:13.900 "name": "BaseBdev3", 00:11:13.900 "uuid": "83b32c37-38eb-4fb1-9ab7-8eebc4cc490e", 00:11:13.900 "is_configured": true, 00:11:13.900 "data_offset": 2048, 00:11:13.900 "data_size": 63488 00:11:13.900 }, 00:11:13.900 { 00:11:13.900 "name": "BaseBdev4", 00:11:13.900 "uuid": "70b69093-4ec8-4a3f-b356-74b9cd8c0bf7", 00:11:13.900 "is_configured": true, 00:11:13.900 "data_offset": 2048, 00:11:13.900 "data_size": 63488 00:11:13.900 } 00:11:13.900 ] 00:11:13.900 }' 00:11:13.900 17:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.900 17:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.158 17:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:14.158 17:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.158 17:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.158 17:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.417 17:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.417 17:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:14.417 17:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:14.417 17:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.417 17:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.417 [2024-12-07 17:27:47.572385] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:14.417 17:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.417 17:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:14.417 17:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:14.417 17:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:14.417 17:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:14.417 17:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:14.417 17:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:14.417 17:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.417 17:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.417 17:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.417 17:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.417 17:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.417 17:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.417 17:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.417 17:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.417 17:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.417 17:27:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.417 "name": "Existed_Raid", 00:11:14.417 "uuid": "41612dd5-4ea7-4fcc-95b7-f2dd625319ca", 00:11:14.417 "strip_size_kb": 64, 00:11:14.417 "state": "configuring", 00:11:14.417 "raid_level": "concat", 00:11:14.417 "superblock": true, 00:11:14.417 "num_base_bdevs": 4, 00:11:14.417 "num_base_bdevs_discovered": 2, 00:11:14.417 "num_base_bdevs_operational": 4, 00:11:14.417 "base_bdevs_list": [ 00:11:14.417 { 00:11:14.417 "name": "BaseBdev1", 00:11:14.417 "uuid": "4f4822e6-e33d-4618-bee7-3abc0a651407", 00:11:14.417 "is_configured": true, 00:11:14.417 "data_offset": 2048, 00:11:14.417 "data_size": 63488 00:11:14.417 }, 00:11:14.417 { 00:11:14.418 "name": null, 00:11:14.418 "uuid": "53d5505b-a5d9-4966-92bc-64b3c05eede6", 00:11:14.418 "is_configured": false, 00:11:14.418 "data_offset": 0, 00:11:14.418 "data_size": 63488 00:11:14.418 }, 00:11:14.418 { 00:11:14.418 "name": null, 00:11:14.418 "uuid": "83b32c37-38eb-4fb1-9ab7-8eebc4cc490e", 00:11:14.418 "is_configured": false, 00:11:14.418 "data_offset": 0, 00:11:14.418 "data_size": 63488 00:11:14.418 }, 00:11:14.418 { 00:11:14.418 "name": "BaseBdev4", 00:11:14.418 "uuid": "70b69093-4ec8-4a3f-b356-74b9cd8c0bf7", 00:11:14.418 "is_configured": true, 00:11:14.418 "data_offset": 2048, 00:11:14.418 "data_size": 63488 00:11:14.418 } 00:11:14.418 ] 00:11:14.418 }' 00:11:14.418 17:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.418 17:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.677 17:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:14.677 17:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.677 17:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.677 
17:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.677 17:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.677 17:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:14.677 17:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:14.677 17:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.677 17:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.677 [2024-12-07 17:27:48.043620] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:14.677 17:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.677 17:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:14.677 17:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:14.677 17:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:14.677 17:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:14.677 17:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:14.677 17:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:14.677 17:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.677 17:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.677 17:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:14.677 17:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.677 17:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.677 17:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.677 17:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.677 17:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.937 17:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.937 17:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.937 "name": "Existed_Raid", 00:11:14.937 "uuid": "41612dd5-4ea7-4fcc-95b7-f2dd625319ca", 00:11:14.937 "strip_size_kb": 64, 00:11:14.937 "state": "configuring", 00:11:14.937 "raid_level": "concat", 00:11:14.937 "superblock": true, 00:11:14.937 "num_base_bdevs": 4, 00:11:14.937 "num_base_bdevs_discovered": 3, 00:11:14.937 "num_base_bdevs_operational": 4, 00:11:14.937 "base_bdevs_list": [ 00:11:14.937 { 00:11:14.937 "name": "BaseBdev1", 00:11:14.937 "uuid": "4f4822e6-e33d-4618-bee7-3abc0a651407", 00:11:14.937 "is_configured": true, 00:11:14.937 "data_offset": 2048, 00:11:14.937 "data_size": 63488 00:11:14.937 }, 00:11:14.937 { 00:11:14.937 "name": null, 00:11:14.937 "uuid": "53d5505b-a5d9-4966-92bc-64b3c05eede6", 00:11:14.937 "is_configured": false, 00:11:14.937 "data_offset": 0, 00:11:14.937 "data_size": 63488 00:11:14.937 }, 00:11:14.937 { 00:11:14.937 "name": "BaseBdev3", 00:11:14.937 "uuid": "83b32c37-38eb-4fb1-9ab7-8eebc4cc490e", 00:11:14.937 "is_configured": true, 00:11:14.937 "data_offset": 2048, 00:11:14.937 "data_size": 63488 00:11:14.937 }, 00:11:14.937 { 00:11:14.937 "name": "BaseBdev4", 00:11:14.937 "uuid": 
"70b69093-4ec8-4a3f-b356-74b9cd8c0bf7", 00:11:14.937 "is_configured": true, 00:11:14.937 "data_offset": 2048, 00:11:14.937 "data_size": 63488 00:11:14.937 } 00:11:14.937 ] 00:11:14.937 }' 00:11:14.937 17:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.937 17:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.196 17:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.196 17:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.196 17:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.196 17:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:15.196 17:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.196 17:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:15.196 17:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:15.196 17:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.196 17:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.196 [2024-12-07 17:27:48.498862] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:15.456 17:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.456 17:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:15.456 17:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.456 17:27:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:15.456 17:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:15.456 17:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:15.456 17:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:15.456 17:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.456 17:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.456 17:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.456 17:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.456 17:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.456 17:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.456 17:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.456 17:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.456 17:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.456 17:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.456 "name": "Existed_Raid", 00:11:15.456 "uuid": "41612dd5-4ea7-4fcc-95b7-f2dd625319ca", 00:11:15.456 "strip_size_kb": 64, 00:11:15.456 "state": "configuring", 00:11:15.456 "raid_level": "concat", 00:11:15.456 "superblock": true, 00:11:15.456 "num_base_bdevs": 4, 00:11:15.456 "num_base_bdevs_discovered": 2, 00:11:15.456 "num_base_bdevs_operational": 4, 00:11:15.456 "base_bdevs_list": [ 00:11:15.456 { 00:11:15.456 "name": null, 00:11:15.456 
"uuid": "4f4822e6-e33d-4618-bee7-3abc0a651407", 00:11:15.456 "is_configured": false, 00:11:15.456 "data_offset": 0, 00:11:15.456 "data_size": 63488 00:11:15.456 }, 00:11:15.456 { 00:11:15.456 "name": null, 00:11:15.456 "uuid": "53d5505b-a5d9-4966-92bc-64b3c05eede6", 00:11:15.456 "is_configured": false, 00:11:15.456 "data_offset": 0, 00:11:15.456 "data_size": 63488 00:11:15.456 }, 00:11:15.456 { 00:11:15.456 "name": "BaseBdev3", 00:11:15.456 "uuid": "83b32c37-38eb-4fb1-9ab7-8eebc4cc490e", 00:11:15.456 "is_configured": true, 00:11:15.456 "data_offset": 2048, 00:11:15.456 "data_size": 63488 00:11:15.456 }, 00:11:15.456 { 00:11:15.456 "name": "BaseBdev4", 00:11:15.456 "uuid": "70b69093-4ec8-4a3f-b356-74b9cd8c0bf7", 00:11:15.456 "is_configured": true, 00:11:15.456 "data_offset": 2048, 00:11:15.456 "data_size": 63488 00:11:15.456 } 00:11:15.456 ] 00:11:15.456 }' 00:11:15.456 17:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.456 17:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.716 17:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:15.716 17:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.716 17:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.716 17:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.716 17:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.716 17:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:15.716 17:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:15.716 17:27:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.716 17:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.716 [2024-12-07 17:27:49.019578] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:15.716 17:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.716 17:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:15.716 17:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.716 17:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:15.716 17:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:15.716 17:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:15.716 17:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:15.716 17:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.716 17:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.716 17:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.716 17:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.716 17:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.716 17:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.716 17:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.716 17:27:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.716 17:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.716 17:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.716 "name": "Existed_Raid", 00:11:15.716 "uuid": "41612dd5-4ea7-4fcc-95b7-f2dd625319ca", 00:11:15.716 "strip_size_kb": 64, 00:11:15.716 "state": "configuring", 00:11:15.716 "raid_level": "concat", 00:11:15.716 "superblock": true, 00:11:15.716 "num_base_bdevs": 4, 00:11:15.716 "num_base_bdevs_discovered": 3, 00:11:15.716 "num_base_bdevs_operational": 4, 00:11:15.716 "base_bdevs_list": [ 00:11:15.716 { 00:11:15.716 "name": null, 00:11:15.716 "uuid": "4f4822e6-e33d-4618-bee7-3abc0a651407", 00:11:15.716 "is_configured": false, 00:11:15.716 "data_offset": 0, 00:11:15.716 "data_size": 63488 00:11:15.716 }, 00:11:15.716 { 00:11:15.716 "name": "BaseBdev2", 00:11:15.716 "uuid": "53d5505b-a5d9-4966-92bc-64b3c05eede6", 00:11:15.716 "is_configured": true, 00:11:15.716 "data_offset": 2048, 00:11:15.716 "data_size": 63488 00:11:15.716 }, 00:11:15.716 { 00:11:15.716 "name": "BaseBdev3", 00:11:15.716 "uuid": "83b32c37-38eb-4fb1-9ab7-8eebc4cc490e", 00:11:15.716 "is_configured": true, 00:11:15.716 "data_offset": 2048, 00:11:15.716 "data_size": 63488 00:11:15.716 }, 00:11:15.716 { 00:11:15.716 "name": "BaseBdev4", 00:11:15.716 "uuid": "70b69093-4ec8-4a3f-b356-74b9cd8c0bf7", 00:11:15.716 "is_configured": true, 00:11:15.716 "data_offset": 2048, 00:11:15.716 "data_size": 63488 00:11:15.716 } 00:11:15.716 ] 00:11:15.716 }' 00:11:15.716 17:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.716 17:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.286 17:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.286 17:27:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.286 17:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.286 17:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:16.286 17:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.286 17:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:16.286 17:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.286 17:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.286 17:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.286 17:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:16.286 17:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.286 17:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4f4822e6-e33d-4618-bee7-3abc0a651407 00:11:16.286 17:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.286 17:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.286 NewBaseBdev 00:11:16.286 [2024-12-07 17:27:49.570117] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:16.286 [2024-12-07 17:27:49.570392] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:16.286 [2024-12-07 17:27:49.570406] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:16.286 [2024-12-07 17:27:49.570699] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:16.286 [2024-12-07 17:27:49.570864] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:16.286 [2024-12-07 17:27:49.570876] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:16.286 [2024-12-07 17:27:49.571042] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:16.286 17:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.286 17:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:16.286 17:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:16.286 17:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:16.286 17:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:16.286 17:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:16.286 17:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:16.286 17:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:16.286 17:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.286 17:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.286 17:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.286 17:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:16.286 17:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.286 
17:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.286 [ 00:11:16.286 { 00:11:16.286 "name": "NewBaseBdev", 00:11:16.286 "aliases": [ 00:11:16.286 "4f4822e6-e33d-4618-bee7-3abc0a651407" 00:11:16.286 ], 00:11:16.286 "product_name": "Malloc disk", 00:11:16.286 "block_size": 512, 00:11:16.286 "num_blocks": 65536, 00:11:16.286 "uuid": "4f4822e6-e33d-4618-bee7-3abc0a651407", 00:11:16.286 "assigned_rate_limits": { 00:11:16.286 "rw_ios_per_sec": 0, 00:11:16.286 "rw_mbytes_per_sec": 0, 00:11:16.286 "r_mbytes_per_sec": 0, 00:11:16.286 "w_mbytes_per_sec": 0 00:11:16.286 }, 00:11:16.286 "claimed": true, 00:11:16.286 "claim_type": "exclusive_write", 00:11:16.286 "zoned": false, 00:11:16.286 "supported_io_types": { 00:11:16.286 "read": true, 00:11:16.286 "write": true, 00:11:16.286 "unmap": true, 00:11:16.286 "flush": true, 00:11:16.286 "reset": true, 00:11:16.286 "nvme_admin": false, 00:11:16.286 "nvme_io": false, 00:11:16.286 "nvme_io_md": false, 00:11:16.286 "write_zeroes": true, 00:11:16.286 "zcopy": true, 00:11:16.286 "get_zone_info": false, 00:11:16.286 "zone_management": false, 00:11:16.286 "zone_append": false, 00:11:16.286 "compare": false, 00:11:16.286 "compare_and_write": false, 00:11:16.286 "abort": true, 00:11:16.286 "seek_hole": false, 00:11:16.286 "seek_data": false, 00:11:16.286 "copy": true, 00:11:16.286 "nvme_iov_md": false 00:11:16.286 }, 00:11:16.286 "memory_domains": [ 00:11:16.286 { 00:11:16.286 "dma_device_id": "system", 00:11:16.286 "dma_device_type": 1 00:11:16.286 }, 00:11:16.286 { 00:11:16.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.286 "dma_device_type": 2 00:11:16.286 } 00:11:16.286 ], 00:11:16.286 "driver_specific": {} 00:11:16.286 } 00:11:16.286 ] 00:11:16.286 17:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.286 17:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:16.286 17:27:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:16.286 17:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.286 17:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:16.286 17:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:16.286 17:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:16.286 17:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:16.286 17:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.286 17:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.286 17:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.286 17:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.286 17:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.286 17:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.286 17:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.286 17:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.286 17:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.286 17:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.286 "name": "Existed_Raid", 00:11:16.286 "uuid": "41612dd5-4ea7-4fcc-95b7-f2dd625319ca", 00:11:16.286 "strip_size_kb": 64, 00:11:16.286 
"state": "online", 00:11:16.286 "raid_level": "concat", 00:11:16.286 "superblock": true, 00:11:16.286 "num_base_bdevs": 4, 00:11:16.286 "num_base_bdevs_discovered": 4, 00:11:16.286 "num_base_bdevs_operational": 4, 00:11:16.286 "base_bdevs_list": [ 00:11:16.286 { 00:11:16.286 "name": "NewBaseBdev", 00:11:16.286 "uuid": "4f4822e6-e33d-4618-bee7-3abc0a651407", 00:11:16.286 "is_configured": true, 00:11:16.286 "data_offset": 2048, 00:11:16.286 "data_size": 63488 00:11:16.286 }, 00:11:16.286 { 00:11:16.286 "name": "BaseBdev2", 00:11:16.286 "uuid": "53d5505b-a5d9-4966-92bc-64b3c05eede6", 00:11:16.286 "is_configured": true, 00:11:16.286 "data_offset": 2048, 00:11:16.286 "data_size": 63488 00:11:16.286 }, 00:11:16.286 { 00:11:16.286 "name": "BaseBdev3", 00:11:16.286 "uuid": "83b32c37-38eb-4fb1-9ab7-8eebc4cc490e", 00:11:16.286 "is_configured": true, 00:11:16.286 "data_offset": 2048, 00:11:16.286 "data_size": 63488 00:11:16.286 }, 00:11:16.286 { 00:11:16.286 "name": "BaseBdev4", 00:11:16.286 "uuid": "70b69093-4ec8-4a3f-b356-74b9cd8c0bf7", 00:11:16.286 "is_configured": true, 00:11:16.286 "data_offset": 2048, 00:11:16.286 "data_size": 63488 00:11:16.286 } 00:11:16.286 ] 00:11:16.286 }' 00:11:16.286 17:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.286 17:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.855 17:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:16.855 17:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:16.855 17:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:16.855 17:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:16.855 17:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:16.855 
17:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:16.855 17:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:16.855 17:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:16.855 17:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.855 17:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.855 [2024-12-07 17:27:50.005827] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:16.855 17:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.855 17:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:16.855 "name": "Existed_Raid", 00:11:16.855 "aliases": [ 00:11:16.855 "41612dd5-4ea7-4fcc-95b7-f2dd625319ca" 00:11:16.855 ], 00:11:16.855 "product_name": "Raid Volume", 00:11:16.855 "block_size": 512, 00:11:16.855 "num_blocks": 253952, 00:11:16.855 "uuid": "41612dd5-4ea7-4fcc-95b7-f2dd625319ca", 00:11:16.855 "assigned_rate_limits": { 00:11:16.855 "rw_ios_per_sec": 0, 00:11:16.855 "rw_mbytes_per_sec": 0, 00:11:16.855 "r_mbytes_per_sec": 0, 00:11:16.855 "w_mbytes_per_sec": 0 00:11:16.855 }, 00:11:16.855 "claimed": false, 00:11:16.855 "zoned": false, 00:11:16.855 "supported_io_types": { 00:11:16.855 "read": true, 00:11:16.855 "write": true, 00:11:16.855 "unmap": true, 00:11:16.855 "flush": true, 00:11:16.855 "reset": true, 00:11:16.855 "nvme_admin": false, 00:11:16.855 "nvme_io": false, 00:11:16.855 "nvme_io_md": false, 00:11:16.855 "write_zeroes": true, 00:11:16.855 "zcopy": false, 00:11:16.855 "get_zone_info": false, 00:11:16.855 "zone_management": false, 00:11:16.855 "zone_append": false, 00:11:16.855 "compare": false, 00:11:16.855 "compare_and_write": false, 00:11:16.855 "abort": 
false, 00:11:16.855 "seek_hole": false, 00:11:16.855 "seek_data": false, 00:11:16.855 "copy": false, 00:11:16.855 "nvme_iov_md": false 00:11:16.855 }, 00:11:16.855 "memory_domains": [ 00:11:16.855 { 00:11:16.855 "dma_device_id": "system", 00:11:16.855 "dma_device_type": 1 00:11:16.855 }, 00:11:16.855 { 00:11:16.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.855 "dma_device_type": 2 00:11:16.855 }, 00:11:16.855 { 00:11:16.855 "dma_device_id": "system", 00:11:16.855 "dma_device_type": 1 00:11:16.855 }, 00:11:16.855 { 00:11:16.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.855 "dma_device_type": 2 00:11:16.855 }, 00:11:16.855 { 00:11:16.855 "dma_device_id": "system", 00:11:16.855 "dma_device_type": 1 00:11:16.855 }, 00:11:16.855 { 00:11:16.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.855 "dma_device_type": 2 00:11:16.855 }, 00:11:16.855 { 00:11:16.855 "dma_device_id": "system", 00:11:16.855 "dma_device_type": 1 00:11:16.855 }, 00:11:16.855 { 00:11:16.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.855 "dma_device_type": 2 00:11:16.855 } 00:11:16.855 ], 00:11:16.855 "driver_specific": { 00:11:16.855 "raid": { 00:11:16.855 "uuid": "41612dd5-4ea7-4fcc-95b7-f2dd625319ca", 00:11:16.855 "strip_size_kb": 64, 00:11:16.855 "state": "online", 00:11:16.855 "raid_level": "concat", 00:11:16.855 "superblock": true, 00:11:16.855 "num_base_bdevs": 4, 00:11:16.855 "num_base_bdevs_discovered": 4, 00:11:16.855 "num_base_bdevs_operational": 4, 00:11:16.855 "base_bdevs_list": [ 00:11:16.855 { 00:11:16.855 "name": "NewBaseBdev", 00:11:16.855 "uuid": "4f4822e6-e33d-4618-bee7-3abc0a651407", 00:11:16.855 "is_configured": true, 00:11:16.855 "data_offset": 2048, 00:11:16.855 "data_size": 63488 00:11:16.855 }, 00:11:16.855 { 00:11:16.855 "name": "BaseBdev2", 00:11:16.855 "uuid": "53d5505b-a5d9-4966-92bc-64b3c05eede6", 00:11:16.855 "is_configured": true, 00:11:16.855 "data_offset": 2048, 00:11:16.855 "data_size": 63488 00:11:16.855 }, 00:11:16.855 { 00:11:16.855 
"name": "BaseBdev3", 00:11:16.855 "uuid": "83b32c37-38eb-4fb1-9ab7-8eebc4cc490e", 00:11:16.855 "is_configured": true, 00:11:16.855 "data_offset": 2048, 00:11:16.855 "data_size": 63488 00:11:16.855 }, 00:11:16.855 { 00:11:16.855 "name": "BaseBdev4", 00:11:16.855 "uuid": "70b69093-4ec8-4a3f-b356-74b9cd8c0bf7", 00:11:16.855 "is_configured": true, 00:11:16.855 "data_offset": 2048, 00:11:16.855 "data_size": 63488 00:11:16.855 } 00:11:16.855 ] 00:11:16.855 } 00:11:16.855 } 00:11:16.855 }' 00:11:16.855 17:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:16.855 17:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:16.855 BaseBdev2 00:11:16.855 BaseBdev3 00:11:16.855 BaseBdev4' 00:11:16.855 17:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.855 17:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:16.855 17:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:16.855 17:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:16.855 17:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.855 17:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.855 17:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.855 17:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.855 17:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:16.855 17:27:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:16.855 17:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:16.855 17:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.855 17:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:16.855 17:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.855 17:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.855 17:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.855 17:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:16.855 17:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:16.855 17:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:16.855 17:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:16.855 17:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.855 17:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.855 17:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.113 17:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.113 17:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:17.113 17:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:11:17.113 17:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:17.113 17:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.113 17:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:17.113 17:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.113 17:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.113 17:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.113 17:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:17.113 17:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:17.113 17:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:17.113 17:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.113 17:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.113 [2024-12-07 17:27:50.304897] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:17.113 [2024-12-07 17:27:50.305018] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:17.113 [2024-12-07 17:27:50.305130] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:17.113 [2024-12-07 17:27:50.305233] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:17.113 [2024-12-07 17:27:50.305282] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:11:17.113 17:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.113 17:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 71969 00:11:17.113 17:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 71969 ']' 00:11:17.113 17:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 71969 00:11:17.113 17:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:17.113 17:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:17.113 17:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71969 00:11:17.113 killing process with pid 71969 00:11:17.114 17:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:17.114 17:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:17.114 17:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71969' 00:11:17.114 17:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 71969 00:11:17.114 17:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 71969 00:11:17.114 [2024-12-07 17:27:50.342087] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:17.680 [2024-12-07 17:27:50.780580] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:19.059 17:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:19.059 00:11:19.059 real 0m11.201s 00:11:19.059 user 0m17.452s 00:11:19.059 sys 0m2.053s 00:11:19.059 17:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:19.059 17:27:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.059 ************************************ 00:11:19.059 END TEST raid_state_function_test_sb 00:11:19.059 ************************************ 00:11:19.059 17:27:52 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:11:19.059 17:27:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:19.059 17:27:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:19.059 17:27:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:19.059 ************************************ 00:11:19.059 START TEST raid_superblock_test 00:11:19.059 ************************************ 00:11:19.059 17:27:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:11:19.059 17:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:11:19.059 17:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:19.059 17:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:19.059 17:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:19.059 17:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:19.059 17:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:19.059 17:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:19.059 17:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:19.059 17:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:19.059 17:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:19.059 17:27:52 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:19.059 17:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:19.059 17:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:19.059 17:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:11:19.059 17:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:19.059 17:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:19.059 17:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72641 00:11:19.059 17:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72641 00:11:19.059 17:27:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72641 ']' 00:11:19.059 17:27:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:19.059 17:27:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:19.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:19.059 17:27:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:19.059 17:27:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:19.059 17:27:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.059 17:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:19.059 [2024-12-07 17:27:52.178029] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:11:19.059 [2024-12-07 17:27:52.178140] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72641 ] 00:11:19.059 [2024-12-07 17:27:52.332325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.318 [2024-12-07 17:27:52.459704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.578 [2024-12-07 17:27:52.707441] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:19.578 [2024-12-07 17:27:52.707493] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:19.838 17:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:19.838 17:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:19.838 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:19.838 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:19.838 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:19.838 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:19.838 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:19.838 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:19.838 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:19.838 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:19.839 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:19.839 
17:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.839 17:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.839 malloc1 00:11:19.839 17:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.839 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:19.839 17:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.839 17:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.839 [2024-12-07 17:27:53.065644] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:19.839 [2024-12-07 17:27:53.065714] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:19.839 [2024-12-07 17:27:53.065740] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:19.839 [2024-12-07 17:27:53.065750] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:19.839 [2024-12-07 17:27:53.068174] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:19.839 pt1 00:11:19.839 [2024-12-07 17:27:53.068298] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:19.839 17:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.839 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:19.839 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:19.839 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:19.839 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:19.839 17:27:53 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:19.839 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:19.839 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:19.839 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:19.839 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:19.839 17:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.839 17:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.839 malloc2 00:11:19.839 17:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.839 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:19.839 17:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.839 17:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.839 [2024-12-07 17:27:53.130222] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:19.839 [2024-12-07 17:27:53.130284] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:19.839 [2024-12-07 17:27:53.130313] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:19.839 [2024-12-07 17:27:53.130322] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:19.839 [2024-12-07 17:27:53.132703] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:19.839 [2024-12-07 17:27:53.132740] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:19.839 
pt2 00:11:19.839 17:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.839 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:19.839 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:19.839 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:19.839 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:19.839 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:19.839 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:19.839 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:19.839 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:19.839 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:19.839 17:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.839 17:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.839 malloc3 00:11:19.839 17:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.839 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:19.839 17:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.839 17:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.839 [2024-12-07 17:27:53.200462] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:19.839 [2024-12-07 17:27:53.200523] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:19.839 [2024-12-07 17:27:53.200546] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:19.839 [2024-12-07 17:27:53.200556] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:19.839 [2024-12-07 17:27:53.203010] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:19.839 [2024-12-07 17:27:53.203139] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:19.839 pt3 00:11:19.839 17:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.839 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:19.839 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:19.839 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:19.839 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:19.839 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:19.839 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:19.839 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:19.839 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:19.839 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:19.839 17:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.839 17:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.099 malloc4 00:11:20.099 17:27:53 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.099 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:20.099 17:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.099 17:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.099 [2024-12-07 17:27:53.262252] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:20.099 [2024-12-07 17:27:53.262403] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:20.099 [2024-12-07 17:27:53.262432] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:20.099 [2024-12-07 17:27:53.262442] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:20.099 [2024-12-07 17:27:53.264886] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:20.099 [2024-12-07 17:27:53.264924] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:20.099 pt4 00:11:20.099 17:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.099 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:20.099 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:20.099 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:20.099 17:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.099 17:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.099 [2024-12-07 17:27:53.274271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:20.099 [2024-12-07 
17:27:53.276326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:20.099 [2024-12-07 17:27:53.276413] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:20.099 [2024-12-07 17:27:53.276462] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:20.099 [2024-12-07 17:27:53.276637] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:20.099 [2024-12-07 17:27:53.276649] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:20.099 [2024-12-07 17:27:53.276909] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:20.099 [2024-12-07 17:27:53.277124] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:20.099 [2024-12-07 17:27:53.277139] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:20.099 [2024-12-07 17:27:53.277289] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:20.099 17:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.099 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:20.099 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:20.099 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:20.099 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:20.099 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:20.099 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:20.099 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:20.099 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.099 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.099 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.099 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.099 17:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.099 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:20.099 17:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.099 17:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.099 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.099 "name": "raid_bdev1", 00:11:20.099 "uuid": "d630b2be-90fa-4225-9f8c-1b691ecc0810", 00:11:20.099 "strip_size_kb": 64, 00:11:20.099 "state": "online", 00:11:20.099 "raid_level": "concat", 00:11:20.099 "superblock": true, 00:11:20.099 "num_base_bdevs": 4, 00:11:20.099 "num_base_bdevs_discovered": 4, 00:11:20.099 "num_base_bdevs_operational": 4, 00:11:20.099 "base_bdevs_list": [ 00:11:20.099 { 00:11:20.099 "name": "pt1", 00:11:20.099 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:20.099 "is_configured": true, 00:11:20.099 "data_offset": 2048, 00:11:20.099 "data_size": 63488 00:11:20.099 }, 00:11:20.099 { 00:11:20.099 "name": "pt2", 00:11:20.099 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:20.099 "is_configured": true, 00:11:20.099 "data_offset": 2048, 00:11:20.099 "data_size": 63488 00:11:20.099 }, 00:11:20.099 { 00:11:20.099 "name": "pt3", 00:11:20.099 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:20.099 "is_configured": true, 00:11:20.099 "data_offset": 2048, 00:11:20.099 
"data_size": 63488 00:11:20.099 }, 00:11:20.099 { 00:11:20.099 "name": "pt4", 00:11:20.099 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:20.099 "is_configured": true, 00:11:20.099 "data_offset": 2048, 00:11:20.099 "data_size": 63488 00:11:20.099 } 00:11:20.099 ] 00:11:20.099 }' 00:11:20.099 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.099 17:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.359 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:20.359 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:20.359 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:20.359 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:20.359 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:20.359 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:20.359 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:20.359 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:20.359 17:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.359 17:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.359 [2024-12-07 17:27:53.729840] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:20.620 17:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.620 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:20.620 "name": "raid_bdev1", 00:11:20.620 "aliases": [ 00:11:20.620 "d630b2be-90fa-4225-9f8c-1b691ecc0810" 
00:11:20.620 ], 00:11:20.620 "product_name": "Raid Volume", 00:11:20.620 "block_size": 512, 00:11:20.620 "num_blocks": 253952, 00:11:20.620 "uuid": "d630b2be-90fa-4225-9f8c-1b691ecc0810", 00:11:20.620 "assigned_rate_limits": { 00:11:20.620 "rw_ios_per_sec": 0, 00:11:20.620 "rw_mbytes_per_sec": 0, 00:11:20.620 "r_mbytes_per_sec": 0, 00:11:20.620 "w_mbytes_per_sec": 0 00:11:20.620 }, 00:11:20.620 "claimed": false, 00:11:20.620 "zoned": false, 00:11:20.620 "supported_io_types": { 00:11:20.620 "read": true, 00:11:20.620 "write": true, 00:11:20.620 "unmap": true, 00:11:20.620 "flush": true, 00:11:20.620 "reset": true, 00:11:20.620 "nvme_admin": false, 00:11:20.620 "nvme_io": false, 00:11:20.620 "nvme_io_md": false, 00:11:20.620 "write_zeroes": true, 00:11:20.620 "zcopy": false, 00:11:20.620 "get_zone_info": false, 00:11:20.620 "zone_management": false, 00:11:20.620 "zone_append": false, 00:11:20.620 "compare": false, 00:11:20.620 "compare_and_write": false, 00:11:20.620 "abort": false, 00:11:20.620 "seek_hole": false, 00:11:20.620 "seek_data": false, 00:11:20.620 "copy": false, 00:11:20.620 "nvme_iov_md": false 00:11:20.620 }, 00:11:20.620 "memory_domains": [ 00:11:20.620 { 00:11:20.620 "dma_device_id": "system", 00:11:20.620 "dma_device_type": 1 00:11:20.620 }, 00:11:20.620 { 00:11:20.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.620 "dma_device_type": 2 00:11:20.620 }, 00:11:20.620 { 00:11:20.620 "dma_device_id": "system", 00:11:20.620 "dma_device_type": 1 00:11:20.620 }, 00:11:20.620 { 00:11:20.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.620 "dma_device_type": 2 00:11:20.620 }, 00:11:20.620 { 00:11:20.620 "dma_device_id": "system", 00:11:20.620 "dma_device_type": 1 00:11:20.620 }, 00:11:20.620 { 00:11:20.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.620 "dma_device_type": 2 00:11:20.620 }, 00:11:20.620 { 00:11:20.620 "dma_device_id": "system", 00:11:20.620 "dma_device_type": 1 00:11:20.620 }, 00:11:20.620 { 00:11:20.620 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:20.620 "dma_device_type": 2 00:11:20.620 } 00:11:20.620 ], 00:11:20.620 "driver_specific": { 00:11:20.620 "raid": { 00:11:20.620 "uuid": "d630b2be-90fa-4225-9f8c-1b691ecc0810", 00:11:20.620 "strip_size_kb": 64, 00:11:20.620 "state": "online", 00:11:20.620 "raid_level": "concat", 00:11:20.620 "superblock": true, 00:11:20.620 "num_base_bdevs": 4, 00:11:20.620 "num_base_bdevs_discovered": 4, 00:11:20.620 "num_base_bdevs_operational": 4, 00:11:20.620 "base_bdevs_list": [ 00:11:20.620 { 00:11:20.620 "name": "pt1", 00:11:20.620 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:20.620 "is_configured": true, 00:11:20.620 "data_offset": 2048, 00:11:20.620 "data_size": 63488 00:11:20.620 }, 00:11:20.620 { 00:11:20.620 "name": "pt2", 00:11:20.620 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:20.620 "is_configured": true, 00:11:20.620 "data_offset": 2048, 00:11:20.620 "data_size": 63488 00:11:20.620 }, 00:11:20.620 { 00:11:20.620 "name": "pt3", 00:11:20.620 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:20.620 "is_configured": true, 00:11:20.620 "data_offset": 2048, 00:11:20.620 "data_size": 63488 00:11:20.620 }, 00:11:20.620 { 00:11:20.620 "name": "pt4", 00:11:20.620 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:20.620 "is_configured": true, 00:11:20.620 "data_offset": 2048, 00:11:20.620 "data_size": 63488 00:11:20.620 } 00:11:20.620 ] 00:11:20.620 } 00:11:20.620 } 00:11:20.620 }' 00:11:20.620 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:20.620 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:20.620 pt2 00:11:20.620 pt3 00:11:20.620 pt4' 00:11:20.620 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:20.620 17:27:53 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:20.620 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:20.620 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:20.620 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:20.620 17:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.620 17:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.620 17:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.620 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:20.620 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:20.620 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:20.620 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:20.620 17:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.620 17:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.620 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:20.620 17:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.620 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:20.620 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:20.620 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:20.620 17:27:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:20.620 17:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.620 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:20.620 17:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.620 17:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.620 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:20.620 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:20.620 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:20.881 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:20.881 17:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:20.881 17:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.881 17:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.881 17:27:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.881 17:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:20.881 17:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:20.881 17:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:20.881 17:27:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.881 17:27:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:11:20.881 17:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:20.881 [2024-12-07 17:27:54.057166] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:20.881 17:27:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.881 17:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d630b2be-90fa-4225-9f8c-1b691ecc0810 00:11:20.881 17:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z d630b2be-90fa-4225-9f8c-1b691ecc0810 ']' 00:11:20.881 17:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:20.881 17:27:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.881 17:27:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.881 [2024-12-07 17:27:54.088829] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:20.881 [2024-12-07 17:27:54.088852] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:20.881 [2024-12-07 17:27:54.088952] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:20.881 [2024-12-07 17:27:54.089029] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:20.881 [2024-12-07 17:27:54.089045] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:20.881 17:27:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.881 17:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.881 17:27:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.881 17:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
jq -r '.[]' 00:11:20.881 17:27:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.881 17:27:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.881 17:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:20.881 17:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:20.881 17:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:20.881 17:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:20.881 17:27:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.881 17:27:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.881 17:27:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.881 17:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:20.881 17:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:20.881 17:27:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.881 17:27:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.881 17:27:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.881 17:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:20.881 17:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:20.881 17:27:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.881 17:27:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.881 17:27:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:11:20.881 17:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:20.881 17:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:20.881 17:27:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.881 17:27:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.881 17:27:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.881 17:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:20.881 17:27:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.882 17:27:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.882 17:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:20.882 17:27:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.882 17:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:20.882 17:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:20.882 17:27:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:20.882 17:27:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:20.882 17:27:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:20.882 17:27:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:20.882 17:27:54 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:20.882 17:27:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:20.882 17:27:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:20.882 17:27:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.882 17:27:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.882 [2024-12-07 17:27:54.232614] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:20.882 [2024-12-07 17:27:54.234759] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:20.882 [2024-12-07 17:27:54.234852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:20.882 [2024-12-07 17:27:54.234905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:20.882 [2024-12-07 17:27:54.235006] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:20.882 [2024-12-07 17:27:54.235099] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:20.882 [2024-12-07 17:27:54.235184] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:20.882 [2024-12-07 17:27:54.235266] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:20.882 [2024-12-07 17:27:54.235318] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:20.882 [2024-12-07 17:27:54.235352] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:11:20.882 request: 00:11:20.882 { 00:11:20.882 "name": "raid_bdev1", 00:11:20.882 "raid_level": "concat", 00:11:20.882 "base_bdevs": [ 00:11:20.882 "malloc1", 00:11:20.882 "malloc2", 00:11:20.882 "malloc3", 00:11:20.882 "malloc4" 00:11:20.882 ], 00:11:20.882 "strip_size_kb": 64, 00:11:20.882 "superblock": false, 00:11:20.882 "method": "bdev_raid_create", 00:11:20.882 "req_id": 1 00:11:20.882 } 00:11:20.882 Got JSON-RPC error response 00:11:20.882 response: 00:11:20.882 { 00:11:20.882 "code": -17, 00:11:20.882 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:20.882 } 00:11:20.882 17:27:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:20.882 17:27:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:20.882 17:27:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:20.882 17:27:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:20.882 17:27:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:20.882 17:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.882 17:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:20.882 17:27:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.882 17:27:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.882 17:27:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.141 17:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:21.141 17:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:21.141 17:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:11:21.141 17:27:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.141 17:27:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.141 [2024-12-07 17:27:54.292497] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:21.141 [2024-12-07 17:27:54.292603] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:21.141 [2024-12-07 17:27:54.292642] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:21.141 [2024-12-07 17:27:54.292676] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:21.141 [2024-12-07 17:27:54.295288] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:21.141 [2024-12-07 17:27:54.295368] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:21.141 [2024-12-07 17:27:54.295494] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:21.142 [2024-12-07 17:27:54.295600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:21.142 pt1 00:11:21.142 17:27:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.142 17:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:21.142 17:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:21.142 17:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:21.142 17:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:21.142 17:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:21.142 17:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:11:21.142 17:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.142 17:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.142 17:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.142 17:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.142 17:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.142 17:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:21.142 17:27:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.142 17:27:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.142 17:27:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.142 17:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.142 "name": "raid_bdev1", 00:11:21.142 "uuid": "d630b2be-90fa-4225-9f8c-1b691ecc0810", 00:11:21.142 "strip_size_kb": 64, 00:11:21.142 "state": "configuring", 00:11:21.142 "raid_level": "concat", 00:11:21.142 "superblock": true, 00:11:21.142 "num_base_bdevs": 4, 00:11:21.142 "num_base_bdevs_discovered": 1, 00:11:21.142 "num_base_bdevs_operational": 4, 00:11:21.142 "base_bdevs_list": [ 00:11:21.142 { 00:11:21.142 "name": "pt1", 00:11:21.142 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:21.142 "is_configured": true, 00:11:21.142 "data_offset": 2048, 00:11:21.142 "data_size": 63488 00:11:21.142 }, 00:11:21.142 { 00:11:21.142 "name": null, 00:11:21.142 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:21.142 "is_configured": false, 00:11:21.142 "data_offset": 2048, 00:11:21.142 "data_size": 63488 00:11:21.142 }, 00:11:21.142 { 00:11:21.142 "name": null, 00:11:21.142 
"uuid": "00000000-0000-0000-0000-000000000003", 00:11:21.142 "is_configured": false, 00:11:21.142 "data_offset": 2048, 00:11:21.142 "data_size": 63488 00:11:21.142 }, 00:11:21.142 { 00:11:21.142 "name": null, 00:11:21.142 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:21.142 "is_configured": false, 00:11:21.142 "data_offset": 2048, 00:11:21.142 "data_size": 63488 00:11:21.142 } 00:11:21.142 ] 00:11:21.142 }' 00:11:21.142 17:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.142 17:27:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.402 17:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:21.402 17:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:21.402 17:27:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.402 17:27:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.402 [2024-12-07 17:27:54.699873] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:21.402 [2024-12-07 17:27:54.700069] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:21.402 [2024-12-07 17:27:54.700097] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:21.402 [2024-12-07 17:27:54.700110] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:21.402 [2024-12-07 17:27:54.700641] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:21.402 [2024-12-07 17:27:54.700664] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:21.402 [2024-12-07 17:27:54.700761] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:21.402 [2024-12-07 17:27:54.700788] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:21.402 pt2 00:11:21.402 17:27:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.402 17:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:21.402 17:27:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.402 17:27:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.402 [2024-12-07 17:27:54.711842] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:21.402 17:27:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.402 17:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:21.402 17:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:21.402 17:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:21.402 17:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:21.402 17:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:21.402 17:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:21.402 17:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.402 17:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.402 17:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.402 17:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.402 17:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.402 17:27:54 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.402 17:27:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.402 17:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:21.402 17:27:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.402 17:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.402 "name": "raid_bdev1", 00:11:21.402 "uuid": "d630b2be-90fa-4225-9f8c-1b691ecc0810", 00:11:21.402 "strip_size_kb": 64, 00:11:21.402 "state": "configuring", 00:11:21.402 "raid_level": "concat", 00:11:21.402 "superblock": true, 00:11:21.402 "num_base_bdevs": 4, 00:11:21.402 "num_base_bdevs_discovered": 1, 00:11:21.402 "num_base_bdevs_operational": 4, 00:11:21.402 "base_bdevs_list": [ 00:11:21.402 { 00:11:21.402 "name": "pt1", 00:11:21.402 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:21.402 "is_configured": true, 00:11:21.402 "data_offset": 2048, 00:11:21.402 "data_size": 63488 00:11:21.402 }, 00:11:21.402 { 00:11:21.402 "name": null, 00:11:21.402 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:21.402 "is_configured": false, 00:11:21.402 "data_offset": 0, 00:11:21.402 "data_size": 63488 00:11:21.402 }, 00:11:21.402 { 00:11:21.402 "name": null, 00:11:21.402 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:21.402 "is_configured": false, 00:11:21.402 "data_offset": 2048, 00:11:21.402 "data_size": 63488 00:11:21.402 }, 00:11:21.402 { 00:11:21.402 "name": null, 00:11:21.402 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:21.402 "is_configured": false, 00:11:21.402 "data_offset": 2048, 00:11:21.402 "data_size": 63488 00:11:21.402 } 00:11:21.402 ] 00:11:21.402 }' 00:11:21.402 17:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.402 17:27:54 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:21.976 17:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:21.976 17:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:21.976 17:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:21.976 17:27:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.976 17:27:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.976 [2024-12-07 17:27:55.195081] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:21.976 [2024-12-07 17:27:55.195293] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:21.976 [2024-12-07 17:27:55.195333] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:21.976 [2024-12-07 17:27:55.195366] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:21.976 [2024-12-07 17:27:55.195913] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:21.976 [2024-12-07 17:27:55.195988] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:21.976 [2024-12-07 17:27:55.196116] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:21.976 [2024-12-07 17:27:55.196166] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:21.976 pt2 00:11:21.976 17:27:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.976 17:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:21.976 17:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:21.976 17:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:21.976 17:27:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.976 17:27:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.976 [2024-12-07 17:27:55.206991] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:21.976 [2024-12-07 17:27:55.207096] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:21.976 [2024-12-07 17:27:55.207137] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:21.976 [2024-12-07 17:27:55.207166] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:21.976 [2024-12-07 17:27:55.207586] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:21.976 [2024-12-07 17:27:55.207640] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:21.976 [2024-12-07 17:27:55.207735] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:21.976 [2024-12-07 17:27:55.207788] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:21.976 pt3 00:11:21.976 17:27:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.976 17:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:21.976 17:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:21.976 17:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:21.976 17:27:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.976 17:27:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.977 [2024-12-07 17:27:55.218933] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:21.977 [2024-12-07 17:27:55.219026] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:21.977 [2024-12-07 17:27:55.219047] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:21.977 [2024-12-07 17:27:55.219056] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:21.977 [2024-12-07 17:27:55.219450] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:21.977 [2024-12-07 17:27:55.219466] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:21.977 [2024-12-07 17:27:55.219530] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:21.977 [2024-12-07 17:27:55.219551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:21.977 [2024-12-07 17:27:55.219684] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:21.977 [2024-12-07 17:27:55.219693] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:21.977 [2024-12-07 17:27:55.219956] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:21.977 [2024-12-07 17:27:55.220120] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:21.977 [2024-12-07 17:27:55.220141] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:21.977 [2024-12-07 17:27:55.220285] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:21.977 pt4 00:11:21.977 17:27:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.977 17:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:21.977 17:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:11:21.977 17:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:21.977 17:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:21.977 17:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:21.977 17:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:21.977 17:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:21.977 17:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:21.977 17:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.977 17:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.977 17:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.977 17:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.977 17:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.977 17:27:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.977 17:27:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.977 17:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:21.977 17:27:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.977 17:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.977 "name": "raid_bdev1", 00:11:21.977 "uuid": "d630b2be-90fa-4225-9f8c-1b691ecc0810", 00:11:21.977 "strip_size_kb": 64, 00:11:21.977 "state": "online", 00:11:21.977 "raid_level": "concat", 00:11:21.977 
"superblock": true, 00:11:21.977 "num_base_bdevs": 4, 00:11:21.977 "num_base_bdevs_discovered": 4, 00:11:21.977 "num_base_bdevs_operational": 4, 00:11:21.977 "base_bdevs_list": [ 00:11:21.977 { 00:11:21.977 "name": "pt1", 00:11:21.977 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:21.977 "is_configured": true, 00:11:21.977 "data_offset": 2048, 00:11:21.977 "data_size": 63488 00:11:21.977 }, 00:11:21.977 { 00:11:21.977 "name": "pt2", 00:11:21.977 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:21.977 "is_configured": true, 00:11:21.977 "data_offset": 2048, 00:11:21.977 "data_size": 63488 00:11:21.977 }, 00:11:21.977 { 00:11:21.977 "name": "pt3", 00:11:21.977 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:21.977 "is_configured": true, 00:11:21.977 "data_offset": 2048, 00:11:21.977 "data_size": 63488 00:11:21.977 }, 00:11:21.977 { 00:11:21.977 "name": "pt4", 00:11:21.977 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:21.977 "is_configured": true, 00:11:21.977 "data_offset": 2048, 00:11:21.977 "data_size": 63488 00:11:21.977 } 00:11:21.977 ] 00:11:21.977 }' 00:11:21.977 17:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.977 17:27:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.546 17:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:22.546 17:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:22.546 17:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:22.546 17:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:22.546 17:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:22.546 17:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:22.546 17:27:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:22.546 17:27:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.546 17:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:22.546 17:27:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.546 [2024-12-07 17:27:55.678553] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:22.546 17:27:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.546 17:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:22.546 "name": "raid_bdev1", 00:11:22.546 "aliases": [ 00:11:22.546 "d630b2be-90fa-4225-9f8c-1b691ecc0810" 00:11:22.546 ], 00:11:22.546 "product_name": "Raid Volume", 00:11:22.546 "block_size": 512, 00:11:22.546 "num_blocks": 253952, 00:11:22.546 "uuid": "d630b2be-90fa-4225-9f8c-1b691ecc0810", 00:11:22.546 "assigned_rate_limits": { 00:11:22.546 "rw_ios_per_sec": 0, 00:11:22.546 "rw_mbytes_per_sec": 0, 00:11:22.546 "r_mbytes_per_sec": 0, 00:11:22.546 "w_mbytes_per_sec": 0 00:11:22.546 }, 00:11:22.546 "claimed": false, 00:11:22.546 "zoned": false, 00:11:22.546 "supported_io_types": { 00:11:22.546 "read": true, 00:11:22.546 "write": true, 00:11:22.546 "unmap": true, 00:11:22.546 "flush": true, 00:11:22.546 "reset": true, 00:11:22.546 "nvme_admin": false, 00:11:22.546 "nvme_io": false, 00:11:22.546 "nvme_io_md": false, 00:11:22.546 "write_zeroes": true, 00:11:22.546 "zcopy": false, 00:11:22.546 "get_zone_info": false, 00:11:22.546 "zone_management": false, 00:11:22.546 "zone_append": false, 00:11:22.546 "compare": false, 00:11:22.546 "compare_and_write": false, 00:11:22.546 "abort": false, 00:11:22.546 "seek_hole": false, 00:11:22.546 "seek_data": false, 00:11:22.546 "copy": false, 00:11:22.546 "nvme_iov_md": false 00:11:22.546 }, 00:11:22.546 
"memory_domains": [ 00:11:22.546 { 00:11:22.546 "dma_device_id": "system", 00:11:22.546 "dma_device_type": 1 00:11:22.546 }, 00:11:22.546 { 00:11:22.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.546 "dma_device_type": 2 00:11:22.546 }, 00:11:22.546 { 00:11:22.546 "dma_device_id": "system", 00:11:22.546 "dma_device_type": 1 00:11:22.546 }, 00:11:22.546 { 00:11:22.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.546 "dma_device_type": 2 00:11:22.546 }, 00:11:22.546 { 00:11:22.546 "dma_device_id": "system", 00:11:22.546 "dma_device_type": 1 00:11:22.546 }, 00:11:22.546 { 00:11:22.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.546 "dma_device_type": 2 00:11:22.546 }, 00:11:22.546 { 00:11:22.546 "dma_device_id": "system", 00:11:22.546 "dma_device_type": 1 00:11:22.546 }, 00:11:22.546 { 00:11:22.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.546 "dma_device_type": 2 00:11:22.546 } 00:11:22.546 ], 00:11:22.546 "driver_specific": { 00:11:22.546 "raid": { 00:11:22.546 "uuid": "d630b2be-90fa-4225-9f8c-1b691ecc0810", 00:11:22.546 "strip_size_kb": 64, 00:11:22.546 "state": "online", 00:11:22.546 "raid_level": "concat", 00:11:22.546 "superblock": true, 00:11:22.546 "num_base_bdevs": 4, 00:11:22.546 "num_base_bdevs_discovered": 4, 00:11:22.546 "num_base_bdevs_operational": 4, 00:11:22.546 "base_bdevs_list": [ 00:11:22.546 { 00:11:22.546 "name": "pt1", 00:11:22.546 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:22.546 "is_configured": true, 00:11:22.546 "data_offset": 2048, 00:11:22.546 "data_size": 63488 00:11:22.546 }, 00:11:22.546 { 00:11:22.546 "name": "pt2", 00:11:22.546 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:22.546 "is_configured": true, 00:11:22.546 "data_offset": 2048, 00:11:22.546 "data_size": 63488 00:11:22.546 }, 00:11:22.546 { 00:11:22.546 "name": "pt3", 00:11:22.546 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:22.546 "is_configured": true, 00:11:22.546 "data_offset": 2048, 00:11:22.546 "data_size": 63488 
00:11:22.546 }, 00:11:22.546 { 00:11:22.546 "name": "pt4", 00:11:22.546 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:22.546 "is_configured": true, 00:11:22.546 "data_offset": 2048, 00:11:22.546 "data_size": 63488 00:11:22.546 } 00:11:22.546 ] 00:11:22.546 } 00:11:22.546 } 00:11:22.546 }' 00:11:22.546 17:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:22.546 17:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:22.546 pt2 00:11:22.546 pt3 00:11:22.546 pt4' 00:11:22.546 17:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:22.546 17:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:22.546 17:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:22.546 17:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:22.546 17:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:22.546 17:27:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.546 17:27:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.546 17:27:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.547 17:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:22.547 17:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:22.547 17:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:22.547 17:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:11:22.547 17:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:22.547 17:27:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.547 17:27:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.547 17:27:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.547 17:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:22.547 17:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:22.547 17:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:22.547 17:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:22.547 17:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:22.547 17:27:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.547 17:27:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.547 17:27:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.547 17:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:22.547 17:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:22.547 17:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:22.547 17:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:22.547 17:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:11:22.547 17:27:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.547 17:27:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.805 17:27:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.805 17:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:22.805 17:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:22.805 17:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:22.805 17:27:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.805 17:27:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.805 17:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:22.805 [2024-12-07 17:27:55.978201] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:22.805 17:27:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.805 17:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' d630b2be-90fa-4225-9f8c-1b691ecc0810 '!=' d630b2be-90fa-4225-9f8c-1b691ecc0810 ']' 00:11:22.805 17:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:11:22.805 17:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:22.805 17:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:22.805 17:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72641 00:11:22.805 17:27:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72641 ']' 00:11:22.805 17:27:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72641 00:11:22.805 17:27:56 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@959 -- # uname 00:11:22.805 17:27:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:22.805 17:27:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72641 00:11:22.805 17:27:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:22.805 17:27:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:22.805 17:27:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72641' 00:11:22.805 killing process with pid 72641 00:11:22.805 17:27:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72641 00:11:22.805 [2024-12-07 17:27:56.067955] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:22.805 17:27:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72641 00:11:22.805 [2024-12-07 17:27:56.068149] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:22.805 [2024-12-07 17:27:56.068242] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:22.805 [2024-12-07 17:27:56.068258] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:23.373 [2024-12-07 17:27:56.511554] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:24.753 17:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:24.753 00:11:24.753 real 0m5.667s 00:11:24.753 user 0m7.929s 00:11:24.753 sys 0m1.024s 00:11:24.753 17:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:24.753 17:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.753 ************************************ 00:11:24.753 END TEST raid_superblock_test 
00:11:24.753 ************************************ 00:11:24.753 17:27:57 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:11:24.753 17:27:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:24.753 17:27:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:24.753 17:27:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:24.753 ************************************ 00:11:24.753 START TEST raid_read_error_test 00:11:24.753 ************************************ 00:11:24.753 17:27:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:11:24.753 17:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:24.753 17:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:24.753 17:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:24.753 17:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:24.753 17:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:24.753 17:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:24.753 17:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:24.753 17:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:24.753 17:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:24.753 17:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:24.753 17:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:24.753 17:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:24.753 17:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
(( i++ )) 00:11:24.753 17:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:24.753 17:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:24.753 17:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:24.753 17:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:24.753 17:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:24.753 17:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:24.753 17:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:24.753 17:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:24.753 17:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:24.753 17:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:24.753 17:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:24.753 17:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:24.753 17:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:24.753 17:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:24.753 17:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:24.753 17:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.gxKf76of7I 00:11:24.753 17:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72906 00:11:24.753 17:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72906 00:11:24.753 17:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:24.753 17:27:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 72906 ']' 00:11:24.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:24.753 17:27:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:24.753 17:27:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:24.753 17:27:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:24.754 17:27:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:24.754 17:27:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.754 [2024-12-07 17:27:57.947788] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:11:24.754 [2024-12-07 17:27:57.947924] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72906 ] 00:11:24.754 [2024-12-07 17:27:58.121060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.013 [2024-12-07 17:27:58.260868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.274 [2024-12-07 17:27:58.498012] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:25.274 [2024-12-07 17:27:58.498059] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:25.534 17:27:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:25.534 17:27:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:25.534 17:27:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:25.534 17:27:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:25.534 17:27:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.535 17:27:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.535 BaseBdev1_malloc 00:11:25.535 17:27:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.535 17:27:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:25.535 17:27:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.535 17:27:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.535 true 00:11:25.535 17:27:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:25.535 17:27:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:25.535 17:27:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.535 17:27:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.535 [2024-12-07 17:27:58.833469] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:25.535 [2024-12-07 17:27:58.833536] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:25.535 [2024-12-07 17:27:58.833558] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:25.535 [2024-12-07 17:27:58.833570] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:25.535 [2024-12-07 17:27:58.835951] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:25.535 [2024-12-07 17:27:58.836107] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:25.535 BaseBdev1 00:11:25.535 17:27:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.535 17:27:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:25.535 17:27:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:25.535 17:27:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.535 17:27:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.535 BaseBdev2_malloc 00:11:25.535 17:27:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.535 17:27:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:25.535 17:27:58 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.535 17:27:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.535 true 00:11:25.535 17:27:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.535 17:27:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:25.535 17:27:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.535 17:27:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.535 [2024-12-07 17:27:58.906915] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:25.535 [2024-12-07 17:27:58.906989] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:25.535 [2024-12-07 17:27:58.907007] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:25.535 [2024-12-07 17:27:58.907019] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:25.535 [2024-12-07 17:27:58.909356] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:25.535 [2024-12-07 17:27:58.909470] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:25.535 BaseBdev2 00:11:25.535 17:27:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.535 17:27:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:25.535 17:27:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:25.535 17:27:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.535 17:27:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.796 BaseBdev3_malloc 00:11:25.796 17:27:58 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.796 17:27:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:25.796 17:27:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.796 17:27:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.796 true 00:11:25.796 17:27:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.796 17:27:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:25.796 17:27:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.796 17:27:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.796 [2024-12-07 17:27:58.993114] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:25.796 [2024-12-07 17:27:58.993169] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:25.796 [2024-12-07 17:27:58.993187] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:25.796 [2024-12-07 17:27:58.993198] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:25.796 [2024-12-07 17:27:58.995543] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:25.796 [2024-12-07 17:27:58.995659] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:25.796 BaseBdev3 00:11:25.796 17:27:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.796 17:27:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:25.796 17:27:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:25.796 17:27:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.796 17:27:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.796 BaseBdev4_malloc 00:11:25.796 17:27:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.796 17:27:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:25.796 17:27:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.796 17:27:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.796 true 00:11:25.796 17:27:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.796 17:27:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:25.796 17:27:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.796 17:27:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.796 [2024-12-07 17:27:59.066203] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:25.796 [2024-12-07 17:27:59.066261] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:25.796 [2024-12-07 17:27:59.066279] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:25.796 [2024-12-07 17:27:59.066291] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:25.796 [2024-12-07 17:27:59.068649] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:25.796 [2024-12-07 17:27:59.068771] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:25.796 BaseBdev4 00:11:25.796 17:27:59 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.796 17:27:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:25.796 17:27:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.796 17:27:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.796 [2024-12-07 17:27:59.078274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:25.796 [2024-12-07 17:27:59.080378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:25.796 [2024-12-07 17:27:59.080453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:25.796 [2024-12-07 17:27:59.080513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:25.796 [2024-12-07 17:27:59.080739] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:25.796 [2024-12-07 17:27:59.080755] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:25.796 [2024-12-07 17:27:59.081017] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:25.796 [2024-12-07 17:27:59.081186] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:25.796 [2024-12-07 17:27:59.081198] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:25.796 [2024-12-07 17:27:59.081340] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:25.796 17:27:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.796 17:27:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:25.796 17:27:59 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:25.796 17:27:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:25.796 17:27:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:25.796 17:27:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:25.797 17:27:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:25.797 17:27:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.797 17:27:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.797 17:27:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.797 17:27:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.797 17:27:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.797 17:27:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.797 17:27:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.797 17:27:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.797 17:27:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.797 17:27:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.797 "name": "raid_bdev1", 00:11:25.797 "uuid": "ee91a45a-f692-4358-900f-bd6861022908", 00:11:25.797 "strip_size_kb": 64, 00:11:25.797 "state": "online", 00:11:25.797 "raid_level": "concat", 00:11:25.797 "superblock": true, 00:11:25.797 "num_base_bdevs": 4, 00:11:25.797 "num_base_bdevs_discovered": 4, 00:11:25.797 "num_base_bdevs_operational": 4, 00:11:25.797 "base_bdevs_list": [ 
00:11:25.797 { 00:11:25.797 "name": "BaseBdev1", 00:11:25.797 "uuid": "e2a18b74-188f-5648-a3e4-e925e930d0a6", 00:11:25.797 "is_configured": true, 00:11:25.797 "data_offset": 2048, 00:11:25.797 "data_size": 63488 00:11:25.797 }, 00:11:25.797 { 00:11:25.797 "name": "BaseBdev2", 00:11:25.797 "uuid": "eae3dcb1-b395-59ca-920d-26c7d9d43021", 00:11:25.797 "is_configured": true, 00:11:25.797 "data_offset": 2048, 00:11:25.797 "data_size": 63488 00:11:25.797 }, 00:11:25.797 { 00:11:25.797 "name": "BaseBdev3", 00:11:25.797 "uuid": "b72c8e06-c12a-52d5-9dfd-95f3ad596ebe", 00:11:25.797 "is_configured": true, 00:11:25.797 "data_offset": 2048, 00:11:25.797 "data_size": 63488 00:11:25.797 }, 00:11:25.797 { 00:11:25.797 "name": "BaseBdev4", 00:11:25.797 "uuid": "a1f38c35-99d6-5e2c-8368-51facfab8c42", 00:11:25.797 "is_configured": true, 00:11:25.797 "data_offset": 2048, 00:11:25.797 "data_size": 63488 00:11:25.797 } 00:11:25.797 ] 00:11:25.797 }' 00:11:25.797 17:27:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.797 17:27:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.366 17:27:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:26.366 17:27:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:26.366 [2024-12-07 17:27:59.634957] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:27.304 17:28:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:27.304 17:28:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.304 17:28:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.304 17:28:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.304 17:28:00 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:27.304 17:28:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:27.304 17:28:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:27.304 17:28:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:27.304 17:28:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:27.304 17:28:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:27.304 17:28:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:27.304 17:28:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:27.304 17:28:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:27.304 17:28:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.304 17:28:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.304 17:28:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.304 17:28:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.304 17:28:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.304 17:28:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.304 17:28:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.304 17:28:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:27.304 17:28:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.304 17:28:00 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.304 "name": "raid_bdev1", 00:11:27.304 "uuid": "ee91a45a-f692-4358-900f-bd6861022908", 00:11:27.304 "strip_size_kb": 64, 00:11:27.304 "state": "online", 00:11:27.304 "raid_level": "concat", 00:11:27.304 "superblock": true, 00:11:27.304 "num_base_bdevs": 4, 00:11:27.304 "num_base_bdevs_discovered": 4, 00:11:27.304 "num_base_bdevs_operational": 4, 00:11:27.304 "base_bdevs_list": [ 00:11:27.304 { 00:11:27.304 "name": "BaseBdev1", 00:11:27.304 "uuid": "e2a18b74-188f-5648-a3e4-e925e930d0a6", 00:11:27.304 "is_configured": true, 00:11:27.304 "data_offset": 2048, 00:11:27.304 "data_size": 63488 00:11:27.304 }, 00:11:27.304 { 00:11:27.304 "name": "BaseBdev2", 00:11:27.304 "uuid": "eae3dcb1-b395-59ca-920d-26c7d9d43021", 00:11:27.304 "is_configured": true, 00:11:27.304 "data_offset": 2048, 00:11:27.304 "data_size": 63488 00:11:27.304 }, 00:11:27.304 { 00:11:27.304 "name": "BaseBdev3", 00:11:27.304 "uuid": "b72c8e06-c12a-52d5-9dfd-95f3ad596ebe", 00:11:27.304 "is_configured": true, 00:11:27.304 "data_offset": 2048, 00:11:27.304 "data_size": 63488 00:11:27.304 }, 00:11:27.304 { 00:11:27.304 "name": "BaseBdev4", 00:11:27.304 "uuid": "a1f38c35-99d6-5e2c-8368-51facfab8c42", 00:11:27.304 "is_configured": true, 00:11:27.304 "data_offset": 2048, 00:11:27.304 "data_size": 63488 00:11:27.304 } 00:11:27.304 ] 00:11:27.304 }' 00:11:27.304 17:28:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.304 17:28:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.874 17:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:27.874 17:28:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.874 17:28:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.874 [2024-12-07 17:28:01.032244] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:27.874 [2024-12-07 17:28:01.032402] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:27.874 [2024-12-07 17:28:01.035339] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:27.874 [2024-12-07 17:28:01.035455] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:27.874 [2024-12-07 17:28:01.035525] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:27.874 [2024-12-07 17:28:01.035582] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:27.874 { 00:11:27.874 "results": [ 00:11:27.874 { 00:11:27.874 "job": "raid_bdev1", 00:11:27.874 "core_mask": "0x1", 00:11:27.874 "workload": "randrw", 00:11:27.874 "percentage": 50, 00:11:27.874 "status": "finished", 00:11:27.874 "queue_depth": 1, 00:11:27.874 "io_size": 131072, 00:11:27.874 "runtime": 1.397878, 00:11:27.874 "iops": 13318.758861646009, 00:11:27.874 "mibps": 1664.844857705751, 00:11:27.874 "io_failed": 1, 00:11:27.874 "io_timeout": 0, 00:11:27.874 "avg_latency_us": 105.51260202577495, 00:11:27.874 "min_latency_us": 26.270742358078603, 00:11:27.874 "max_latency_us": 1380.8349344978167 00:11:27.874 } 00:11:27.874 ], 00:11:27.874 "core_count": 1 00:11:27.874 } 00:11:27.874 17:28:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.874 17:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72906 00:11:27.874 17:28:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 72906 ']' 00:11:27.874 17:28:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 72906 00:11:27.874 17:28:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:27.874 17:28:01 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:27.874 17:28:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72906 00:11:27.874 killing process with pid 72906 00:11:27.874 17:28:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:27.874 17:28:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:27.874 17:28:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72906' 00:11:27.874 17:28:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 72906 00:11:27.874 [2024-12-07 17:28:01.081113] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:27.874 17:28:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 72906 00:11:28.132 [2024-12-07 17:28:01.432928] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:29.510 17:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.gxKf76of7I 00:11:29.510 17:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:29.510 17:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:29.510 17:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:11:29.510 17:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:29.510 ************************************ 00:11:29.510 END TEST raid_read_error_test 00:11:29.510 ************************************ 00:11:29.510 17:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:29.510 17:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:29.510 17:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:11:29.510 00:11:29.510 real 0m4.916s 
00:11:29.510 user 0m5.671s 00:11:29.510 sys 0m0.690s 00:11:29.510 17:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:29.510 17:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.510 17:28:02 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:11:29.510 17:28:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:29.510 17:28:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:29.510 17:28:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:29.510 ************************************ 00:11:29.510 START TEST raid_write_error_test 00:11:29.510 ************************************ 00:11:29.510 17:28:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:11:29.510 17:28:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:29.510 17:28:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:29.510 17:28:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:29.510 17:28:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:29.510 17:28:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:29.510 17:28:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:29.510 17:28:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:29.510 17:28:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:29.510 17:28:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:29.510 17:28:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:29.510 17:28:02 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:29.510 17:28:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:29.510 17:28:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:29.510 17:28:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:29.510 17:28:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:29.510 17:28:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:29.510 17:28:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:29.510 17:28:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:29.510 17:28:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:29.510 17:28:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:29.510 17:28:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:29.510 17:28:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:29.510 17:28:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:29.510 17:28:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:29.510 17:28:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:29.510 17:28:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:29.510 17:28:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:29.510 17:28:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:29.510 17:28:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Y3vJFINNmW 00:11:29.510 17:28:02 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73054 00:11:29.510 17:28:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:29.510 17:28:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73054 00:11:29.510 17:28:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 73054 ']' 00:11:29.510 17:28:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:29.510 17:28:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:29.510 17:28:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:29.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:29.510 17:28:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:29.510 17:28:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.769 [2024-12-07 17:28:02.933814] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:11:29.769 [2024-12-07 17:28:02.934063] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73054 ] 00:11:29.769 [2024-12-07 17:28:03.112595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:30.029 [2024-12-07 17:28:03.248099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.291 [2024-12-07 17:28:03.480676] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:30.291 [2024-12-07 17:28:03.480851] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:30.552 17:28:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:30.552 17:28:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:30.552 17:28:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:30.552 17:28:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:30.552 17:28:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.552 17:28:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.552 BaseBdev1_malloc 00:11:30.552 17:28:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.552 17:28:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:30.552 17:28:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.552 17:28:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.552 true 00:11:30.552 17:28:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:30.552 17:28:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:30.552 17:28:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.552 17:28:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.552 [2024-12-07 17:28:03.840611] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:30.552 [2024-12-07 17:28:03.840680] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:30.552 [2024-12-07 17:28:03.840701] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:30.552 [2024-12-07 17:28:03.840712] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:30.552 [2024-12-07 17:28:03.843011] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:30.552 [2024-12-07 17:28:03.843152] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:30.552 BaseBdev1 00:11:30.552 17:28:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.552 17:28:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:30.552 17:28:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:30.552 17:28:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.552 17:28:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.552 BaseBdev2_malloc 00:11:30.552 17:28:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.552 17:28:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:30.552 17:28:03 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.552 17:28:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.552 true 00:11:30.552 17:28:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.552 17:28:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:30.552 17:28:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.552 17:28:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.552 [2024-12-07 17:28:03.914065] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:30.552 [2024-12-07 17:28:03.914211] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:30.552 [2024-12-07 17:28:03.914233] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:30.552 [2024-12-07 17:28:03.914246] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:30.552 [2024-12-07 17:28:03.916706] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:30.552 [2024-12-07 17:28:03.916748] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:30.552 BaseBdev2 00:11:30.552 17:28:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.552 17:28:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:30.552 17:28:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:30.552 17:28:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.552 17:28:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:30.813 BaseBdev3_malloc 00:11:30.813 17:28:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.813 17:28:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:30.813 17:28:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.813 17:28:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.813 true 00:11:30.813 17:28:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.813 17:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:30.813 17:28:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.813 17:28:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.813 [2024-12-07 17:28:04.016112] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:30.813 [2024-12-07 17:28:04.016171] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:30.813 [2024-12-07 17:28:04.016189] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:30.813 [2024-12-07 17:28:04.016200] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:30.813 [2024-12-07 17:28:04.018544] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:30.813 [2024-12-07 17:28:04.018663] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:30.813 BaseBdev3 00:11:30.813 17:28:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.813 17:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:30.813 17:28:04 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:30.813 17:28:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.813 17:28:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.813 BaseBdev4_malloc 00:11:30.813 17:28:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.813 17:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:30.813 17:28:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.813 17:28:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.813 true 00:11:30.813 17:28:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.813 17:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:30.813 17:28:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.813 17:28:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.813 [2024-12-07 17:28:04.090713] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:30.813 [2024-12-07 17:28:04.090833] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:30.813 [2024-12-07 17:28:04.090854] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:30.813 [2024-12-07 17:28:04.090865] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:30.813 [2024-12-07 17:28:04.093213] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:30.813 [2024-12-07 17:28:04.093253] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:30.813 BaseBdev4 
00:11:30.813 17:28:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.813 17:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:30.813 17:28:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.813 17:28:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.813 [2024-12-07 17:28:04.102761] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:30.813 [2024-12-07 17:28:04.104792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:30.813 [2024-12-07 17:28:04.104866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:30.813 [2024-12-07 17:28:04.104922] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:30.813 [2024-12-07 17:28:04.105184] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:30.813 [2024-12-07 17:28:04.105207] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:30.813 [2024-12-07 17:28:04.105450] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:30.813 [2024-12-07 17:28:04.105616] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:30.813 [2024-12-07 17:28:04.105627] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:30.813 [2024-12-07 17:28:04.105793] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:30.813 17:28:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.813 17:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:11:30.813 17:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:30.813 17:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:30.813 17:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:30.813 17:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:30.813 17:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:30.813 17:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.813 17:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.813 17:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.813 17:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.813 17:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.813 17:28:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.813 17:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:30.814 17:28:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.814 17:28:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.814 17:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.814 "name": "raid_bdev1", 00:11:30.814 "uuid": "3d7ef603-3a55-42cf-a75a-45cad8a97e12", 00:11:30.814 "strip_size_kb": 64, 00:11:30.814 "state": "online", 00:11:30.814 "raid_level": "concat", 00:11:30.814 "superblock": true, 00:11:30.814 "num_base_bdevs": 4, 00:11:30.814 "num_base_bdevs_discovered": 4, 00:11:30.814 
"num_base_bdevs_operational": 4, 00:11:30.814 "base_bdevs_list": [ 00:11:30.814 { 00:11:30.814 "name": "BaseBdev1", 00:11:30.814 "uuid": "1a951f5a-5109-587b-b10e-7367cc2559b8", 00:11:30.814 "is_configured": true, 00:11:30.814 "data_offset": 2048, 00:11:30.814 "data_size": 63488 00:11:30.814 }, 00:11:30.814 { 00:11:30.814 "name": "BaseBdev2", 00:11:30.814 "uuid": "0a545470-d681-5a85-9b65-28c1c0db46e5", 00:11:30.814 "is_configured": true, 00:11:30.814 "data_offset": 2048, 00:11:30.814 "data_size": 63488 00:11:30.814 }, 00:11:30.814 { 00:11:30.814 "name": "BaseBdev3", 00:11:30.814 "uuid": "cb1b2e4b-9875-5fde-bd15-b14d97d72dd5", 00:11:30.814 "is_configured": true, 00:11:30.814 "data_offset": 2048, 00:11:30.814 "data_size": 63488 00:11:30.814 }, 00:11:30.814 { 00:11:30.814 "name": "BaseBdev4", 00:11:30.814 "uuid": "d1bf79a9-abf5-5acb-91b9-183c1cc812b7", 00:11:30.814 "is_configured": true, 00:11:30.814 "data_offset": 2048, 00:11:30.814 "data_size": 63488 00:11:30.814 } 00:11:30.814 ] 00:11:30.814 }' 00:11:30.814 17:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.814 17:28:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.383 17:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:31.383 17:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:31.383 [2024-12-07 17:28:04.667358] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:32.319 17:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:32.319 17:28:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.319 17:28:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.319 17:28:05 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.319 17:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:32.319 17:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:32.319 17:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:32.319 17:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:32.319 17:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:32.319 17:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:32.319 17:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:32.319 17:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:32.319 17:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:32.319 17:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.319 17:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.319 17:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.319 17:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.319 17:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.319 17:28:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.319 17:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:32.319 17:28:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.319 17:28:05 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.319 17:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.319 "name": "raid_bdev1", 00:11:32.319 "uuid": "3d7ef603-3a55-42cf-a75a-45cad8a97e12", 00:11:32.319 "strip_size_kb": 64, 00:11:32.319 "state": "online", 00:11:32.319 "raid_level": "concat", 00:11:32.319 "superblock": true, 00:11:32.319 "num_base_bdevs": 4, 00:11:32.319 "num_base_bdevs_discovered": 4, 00:11:32.319 "num_base_bdevs_operational": 4, 00:11:32.319 "base_bdevs_list": [ 00:11:32.319 { 00:11:32.319 "name": "BaseBdev1", 00:11:32.319 "uuid": "1a951f5a-5109-587b-b10e-7367cc2559b8", 00:11:32.319 "is_configured": true, 00:11:32.319 "data_offset": 2048, 00:11:32.319 "data_size": 63488 00:11:32.319 }, 00:11:32.319 { 00:11:32.319 "name": "BaseBdev2", 00:11:32.319 "uuid": "0a545470-d681-5a85-9b65-28c1c0db46e5", 00:11:32.319 "is_configured": true, 00:11:32.319 "data_offset": 2048, 00:11:32.319 "data_size": 63488 00:11:32.319 }, 00:11:32.319 { 00:11:32.319 "name": "BaseBdev3", 00:11:32.319 "uuid": "cb1b2e4b-9875-5fde-bd15-b14d97d72dd5", 00:11:32.319 "is_configured": true, 00:11:32.319 "data_offset": 2048, 00:11:32.319 "data_size": 63488 00:11:32.319 }, 00:11:32.319 { 00:11:32.319 "name": "BaseBdev4", 00:11:32.319 "uuid": "d1bf79a9-abf5-5acb-91b9-183c1cc812b7", 00:11:32.319 "is_configured": true, 00:11:32.319 "data_offset": 2048, 00:11:32.319 "data_size": 63488 00:11:32.319 } 00:11:32.319 ] 00:11:32.319 }' 00:11:32.319 17:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.319 17:28:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.888 17:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:32.888 17:28:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.888 17:28:06 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:32.888 [2024-12-07 17:28:06.024354] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:32.888 [2024-12-07 17:28:06.024406] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:32.888 [2024-12-07 17:28:06.027121] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:32.888 [2024-12-07 17:28:06.027215] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:32.888 [2024-12-07 17:28:06.027263] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:32.888 [2024-12-07 17:28:06.027280] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:32.888 { 00:11:32.888 "results": [ 00:11:32.888 { 00:11:32.888 "job": "raid_bdev1", 00:11:32.888 "core_mask": "0x1", 00:11:32.888 "workload": "randrw", 00:11:32.888 "percentage": 50, 00:11:32.888 "status": "finished", 00:11:32.888 "queue_depth": 1, 00:11:32.888 "io_size": 131072, 00:11:32.888 "runtime": 1.357432, 00:11:32.888 "iops": 13310.427336323293, 00:11:32.888 "mibps": 1663.8034170404117, 00:11:32.888 "io_failed": 1, 00:11:32.888 "io_timeout": 0, 00:11:32.888 "avg_latency_us": 105.67500563705214, 00:11:32.888 "min_latency_us": 25.4882096069869, 00:11:32.888 "max_latency_us": 1466.6899563318777 00:11:32.888 } 00:11:32.888 ], 00:11:32.888 "core_count": 1 00:11:32.888 } 00:11:32.888 17:28:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.888 17:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73054 00:11:32.888 17:28:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 73054 ']' 00:11:32.888 17:28:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 73054 00:11:32.888 17:28:06 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:11:32.888 17:28:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:32.888 17:28:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73054 00:11:32.888 17:28:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:32.888 17:28:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:32.888 17:28:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73054' 00:11:32.888 killing process with pid 73054 00:11:32.888 17:28:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 73054 00:11:32.888 [2024-12-07 17:28:06.058324] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:32.888 17:28:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 73054 00:11:33.149 [2024-12-07 17:28:06.420442] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:34.528 17:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Y3vJFINNmW 00:11:34.528 17:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:34.528 17:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:34.528 17:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:11:34.528 17:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:34.528 17:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:34.528 17:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:34.528 17:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:11:34.528 ************************************ 00:11:34.528 END TEST 
raid_write_error_test 00:11:34.528 ************************************ 00:11:34.528 00:11:34.528 real 0m4.902s 00:11:34.528 user 0m5.630s 00:11:34.528 sys 0m0.691s 00:11:34.528 17:28:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:34.528 17:28:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.528 17:28:07 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:34.528 17:28:07 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:11:34.528 17:28:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:34.528 17:28:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:34.528 17:28:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:34.528 ************************************ 00:11:34.528 START TEST raid_state_function_test 00:11:34.528 ************************************ 00:11:34.529 17:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:11:34.529 17:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:34.529 17:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:34.529 17:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:34.529 17:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:34.529 17:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:34.529 17:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:34.529 17:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:34.529 17:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:34.529 17:28:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:34.529 17:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:34.529 17:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:34.529 17:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:34.529 17:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:34.529 17:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:34.529 17:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:34.529 17:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:34.529 17:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:34.529 17:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:34.529 17:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:34.529 17:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:34.529 17:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:34.529 17:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:34.529 17:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:34.529 17:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:34.529 17:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:34.529 17:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:34.529 17:28:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:34.529 17:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:34.529 Process raid pid: 73203 00:11:34.529 17:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73203 00:11:34.529 17:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:34.529 17:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73203' 00:11:34.529 17:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73203 00:11:34.529 17:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73203 ']' 00:11:34.529 17:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.529 17:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:34.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:34.529 17:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.529 17:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:34.529 17:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.529 [2024-12-07 17:28:07.906430] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:11:34.529 [2024-12-07 17:28:07.906559] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:34.788 [2024-12-07 17:28:08.076178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.048 [2024-12-07 17:28:08.212823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.307 [2024-12-07 17:28:08.456342] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:35.307 [2024-12-07 17:28:08.456386] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:35.567 17:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:35.567 17:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:35.567 17:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:35.567 17:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.567 17:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.567 [2024-12-07 17:28:08.753458] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:35.567 [2024-12-07 17:28:08.753534] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:35.567 [2024-12-07 17:28:08.753545] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:35.567 [2024-12-07 17:28:08.753556] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:35.567 [2024-12-07 17:28:08.753562] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:35.567 [2024-12-07 17:28:08.753573] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:35.567 [2024-12-07 17:28:08.753585] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:35.567 [2024-12-07 17:28:08.753595] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:35.567 17:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.567 17:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:35.567 17:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.567 17:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:35.567 17:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:35.567 17:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:35.567 17:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:35.567 17:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.567 17:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.567 17:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.567 17:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.567 17:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.567 17:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.567 17:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:11:35.567 17:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.567 17:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.567 17:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.567 "name": "Existed_Raid", 00:11:35.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.567 "strip_size_kb": 0, 00:11:35.567 "state": "configuring", 00:11:35.567 "raid_level": "raid1", 00:11:35.567 "superblock": false, 00:11:35.567 "num_base_bdevs": 4, 00:11:35.567 "num_base_bdevs_discovered": 0, 00:11:35.567 "num_base_bdevs_operational": 4, 00:11:35.567 "base_bdevs_list": [ 00:11:35.567 { 00:11:35.567 "name": "BaseBdev1", 00:11:35.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.567 "is_configured": false, 00:11:35.567 "data_offset": 0, 00:11:35.567 "data_size": 0 00:11:35.567 }, 00:11:35.567 { 00:11:35.567 "name": "BaseBdev2", 00:11:35.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.567 "is_configured": false, 00:11:35.567 "data_offset": 0, 00:11:35.567 "data_size": 0 00:11:35.567 }, 00:11:35.567 { 00:11:35.567 "name": "BaseBdev3", 00:11:35.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.567 "is_configured": false, 00:11:35.567 "data_offset": 0, 00:11:35.567 "data_size": 0 00:11:35.567 }, 00:11:35.567 { 00:11:35.567 "name": "BaseBdev4", 00:11:35.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.567 "is_configured": false, 00:11:35.567 "data_offset": 0, 00:11:35.567 "data_size": 0 00:11:35.567 } 00:11:35.567 ] 00:11:35.567 }' 00:11:35.567 17:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.567 17:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.827 17:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:11:35.827 17:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.827 17:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.827 [2024-12-07 17:28:09.192666] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:35.827 [2024-12-07 17:28:09.192807] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:35.827 17:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.827 17:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:35.827 17:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.827 17:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.827 [2024-12-07 17:28:09.204623] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:35.827 [2024-12-07 17:28:09.204722] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:35.827 [2024-12-07 17:28:09.204750] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:35.827 [2024-12-07 17:28:09.204773] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:35.827 [2024-12-07 17:28:09.204791] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:35.827 [2024-12-07 17:28:09.204812] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:35.827 [2024-12-07 17:28:09.204828] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:35.827 [2024-12-07 17:28:09.204849] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: 
*DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:36.087 17:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.087 17:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:36.087 17:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.087 17:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.087 [2024-12-07 17:28:09.259751] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:36.087 BaseBdev1 00:11:36.087 17:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.087 17:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:36.087 17:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:36.087 17:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:36.087 17:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:36.087 17:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:36.087 17:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:36.087 17:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:36.087 17:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.087 17:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.087 17:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.087 17:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
-t 2000 00:11:36.087 17:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.087 17:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.087 [ 00:11:36.087 { 00:11:36.087 "name": "BaseBdev1", 00:11:36.087 "aliases": [ 00:11:36.087 "3cc8947f-cacb-41ab-bf44-8cfe722e3e43" 00:11:36.087 ], 00:11:36.087 "product_name": "Malloc disk", 00:11:36.087 "block_size": 512, 00:11:36.087 "num_blocks": 65536, 00:11:36.087 "uuid": "3cc8947f-cacb-41ab-bf44-8cfe722e3e43", 00:11:36.087 "assigned_rate_limits": { 00:11:36.087 "rw_ios_per_sec": 0, 00:11:36.087 "rw_mbytes_per_sec": 0, 00:11:36.087 "r_mbytes_per_sec": 0, 00:11:36.087 "w_mbytes_per_sec": 0 00:11:36.087 }, 00:11:36.087 "claimed": true, 00:11:36.087 "claim_type": "exclusive_write", 00:11:36.087 "zoned": false, 00:11:36.087 "supported_io_types": { 00:11:36.087 "read": true, 00:11:36.087 "write": true, 00:11:36.087 "unmap": true, 00:11:36.087 "flush": true, 00:11:36.087 "reset": true, 00:11:36.087 "nvme_admin": false, 00:11:36.087 "nvme_io": false, 00:11:36.087 "nvme_io_md": false, 00:11:36.087 "write_zeroes": true, 00:11:36.087 "zcopy": true, 00:11:36.087 "get_zone_info": false, 00:11:36.087 "zone_management": false, 00:11:36.087 "zone_append": false, 00:11:36.087 "compare": false, 00:11:36.087 "compare_and_write": false, 00:11:36.087 "abort": true, 00:11:36.087 "seek_hole": false, 00:11:36.087 "seek_data": false, 00:11:36.087 "copy": true, 00:11:36.087 "nvme_iov_md": false 00:11:36.087 }, 00:11:36.087 "memory_domains": [ 00:11:36.087 { 00:11:36.087 "dma_device_id": "system", 00:11:36.087 "dma_device_type": 1 00:11:36.087 }, 00:11:36.087 { 00:11:36.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.087 "dma_device_type": 2 00:11:36.087 } 00:11:36.087 ], 00:11:36.087 "driver_specific": {} 00:11:36.087 } 00:11:36.087 ] 00:11:36.087 17:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:36.087 17:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:36.087 17:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:36.087 17:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:36.087 17:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:36.088 17:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:36.088 17:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:36.088 17:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:36.088 17:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.088 17:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.088 17:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.088 17:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.088 17:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.088 17:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.088 17:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.088 17:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.088 17:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.088 17:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.088 "name": "Existed_Raid", 00:11:36.088 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:36.088 "strip_size_kb": 0, 00:11:36.088 "state": "configuring", 00:11:36.088 "raid_level": "raid1", 00:11:36.088 "superblock": false, 00:11:36.088 "num_base_bdevs": 4, 00:11:36.088 "num_base_bdevs_discovered": 1, 00:11:36.088 "num_base_bdevs_operational": 4, 00:11:36.088 "base_bdevs_list": [ 00:11:36.088 { 00:11:36.088 "name": "BaseBdev1", 00:11:36.088 "uuid": "3cc8947f-cacb-41ab-bf44-8cfe722e3e43", 00:11:36.088 "is_configured": true, 00:11:36.088 "data_offset": 0, 00:11:36.088 "data_size": 65536 00:11:36.088 }, 00:11:36.088 { 00:11:36.088 "name": "BaseBdev2", 00:11:36.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.088 "is_configured": false, 00:11:36.088 "data_offset": 0, 00:11:36.088 "data_size": 0 00:11:36.088 }, 00:11:36.088 { 00:11:36.088 "name": "BaseBdev3", 00:11:36.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.088 "is_configured": false, 00:11:36.088 "data_offset": 0, 00:11:36.088 "data_size": 0 00:11:36.088 }, 00:11:36.088 { 00:11:36.088 "name": "BaseBdev4", 00:11:36.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.088 "is_configured": false, 00:11:36.088 "data_offset": 0, 00:11:36.088 "data_size": 0 00:11:36.088 } 00:11:36.088 ] 00:11:36.088 }' 00:11:36.088 17:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.088 17:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.657 17:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:36.657 17:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.657 17:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.657 [2024-12-07 17:28:09.755026] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:36.657 [2024-12-07 17:28:09.755190] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:36.657 17:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.657 17:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:36.657 17:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.657 17:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.657 [2024-12-07 17:28:09.767029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:36.657 [2024-12-07 17:28:09.769171] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:36.657 [2024-12-07 17:28:09.769249] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:36.657 [2024-12-07 17:28:09.769277] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:36.657 [2024-12-07 17:28:09.769302] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:36.657 [2024-12-07 17:28:09.769320] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:36.657 [2024-12-07 17:28:09.769340] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:36.657 17:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.657 17:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:36.657 17:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:36.657 17:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:36.657 17:28:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:36.657 17:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:36.657 17:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:36.657 17:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:36.657 17:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:36.657 17:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.657 17:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.657 17:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.657 17:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.657 17:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.657 17:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.657 17:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.657 17:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.657 17:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.657 17:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.657 "name": "Existed_Raid", 00:11:36.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.657 "strip_size_kb": 0, 00:11:36.657 "state": "configuring", 00:11:36.657 "raid_level": "raid1", 00:11:36.657 "superblock": false, 00:11:36.657 "num_base_bdevs": 4, 00:11:36.657 "num_base_bdevs_discovered": 1, 00:11:36.657 
"num_base_bdevs_operational": 4, 00:11:36.657 "base_bdevs_list": [ 00:11:36.657 { 00:11:36.657 "name": "BaseBdev1", 00:11:36.657 "uuid": "3cc8947f-cacb-41ab-bf44-8cfe722e3e43", 00:11:36.657 "is_configured": true, 00:11:36.657 "data_offset": 0, 00:11:36.657 "data_size": 65536 00:11:36.657 }, 00:11:36.657 { 00:11:36.657 "name": "BaseBdev2", 00:11:36.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.657 "is_configured": false, 00:11:36.657 "data_offset": 0, 00:11:36.657 "data_size": 0 00:11:36.657 }, 00:11:36.657 { 00:11:36.657 "name": "BaseBdev3", 00:11:36.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.657 "is_configured": false, 00:11:36.657 "data_offset": 0, 00:11:36.657 "data_size": 0 00:11:36.657 }, 00:11:36.657 { 00:11:36.657 "name": "BaseBdev4", 00:11:36.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.657 "is_configured": false, 00:11:36.657 "data_offset": 0, 00:11:36.657 "data_size": 0 00:11:36.657 } 00:11:36.657 ] 00:11:36.657 }' 00:11:36.657 17:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.657 17:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.917 17:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:36.917 17:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.917 17:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.917 [2024-12-07 17:28:10.263906] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:36.917 BaseBdev2 00:11:36.917 17:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.917 17:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:36.917 17:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev2 00:11:36.917 17:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:36.917 17:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:36.917 17:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:36.917 17:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:36.917 17:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:36.917 17:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.917 17:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.917 17:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.917 17:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:36.917 17:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.917 17:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.917 [ 00:11:36.917 { 00:11:36.917 "name": "BaseBdev2", 00:11:36.917 "aliases": [ 00:11:36.917 "31b1fffc-7c6d-4daa-8348-a5d92430ef46" 00:11:36.917 ], 00:11:36.917 "product_name": "Malloc disk", 00:11:36.917 "block_size": 512, 00:11:36.917 "num_blocks": 65536, 00:11:36.917 "uuid": "31b1fffc-7c6d-4daa-8348-a5d92430ef46", 00:11:36.917 "assigned_rate_limits": { 00:11:36.917 "rw_ios_per_sec": 0, 00:11:36.917 "rw_mbytes_per_sec": 0, 00:11:36.917 "r_mbytes_per_sec": 0, 00:11:36.917 "w_mbytes_per_sec": 0 00:11:36.917 }, 00:11:36.917 "claimed": true, 00:11:36.917 "claim_type": "exclusive_write", 00:11:36.917 "zoned": false, 00:11:36.917 "supported_io_types": { 00:11:37.177 "read": true, 00:11:37.177 "write": true, 00:11:37.177 
"unmap": true, 00:11:37.177 "flush": true, 00:11:37.177 "reset": true, 00:11:37.177 "nvme_admin": false, 00:11:37.177 "nvme_io": false, 00:11:37.177 "nvme_io_md": false, 00:11:37.177 "write_zeroes": true, 00:11:37.177 "zcopy": true, 00:11:37.177 "get_zone_info": false, 00:11:37.177 "zone_management": false, 00:11:37.177 "zone_append": false, 00:11:37.177 "compare": false, 00:11:37.177 "compare_and_write": false, 00:11:37.177 "abort": true, 00:11:37.177 "seek_hole": false, 00:11:37.177 "seek_data": false, 00:11:37.177 "copy": true, 00:11:37.177 "nvme_iov_md": false 00:11:37.177 }, 00:11:37.177 "memory_domains": [ 00:11:37.177 { 00:11:37.177 "dma_device_id": "system", 00:11:37.177 "dma_device_type": 1 00:11:37.177 }, 00:11:37.177 { 00:11:37.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.177 "dma_device_type": 2 00:11:37.177 } 00:11:37.177 ], 00:11:37.177 "driver_specific": {} 00:11:37.177 } 00:11:37.177 ] 00:11:37.177 17:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.177 17:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:37.177 17:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:37.177 17:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:37.177 17:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:37.177 17:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:37.177 17:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:37.177 17:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:37.177 17:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:37.177 17:28:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:37.177 17:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.177 17:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.177 17:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.177 17:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.177 17:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.177 17:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.177 17:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.177 17:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.177 17:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.177 17:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.177 "name": "Existed_Raid", 00:11:37.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.177 "strip_size_kb": 0, 00:11:37.177 "state": "configuring", 00:11:37.177 "raid_level": "raid1", 00:11:37.177 "superblock": false, 00:11:37.177 "num_base_bdevs": 4, 00:11:37.177 "num_base_bdevs_discovered": 2, 00:11:37.177 "num_base_bdevs_operational": 4, 00:11:37.177 "base_bdevs_list": [ 00:11:37.177 { 00:11:37.177 "name": "BaseBdev1", 00:11:37.177 "uuid": "3cc8947f-cacb-41ab-bf44-8cfe722e3e43", 00:11:37.177 "is_configured": true, 00:11:37.177 "data_offset": 0, 00:11:37.177 "data_size": 65536 00:11:37.177 }, 00:11:37.177 { 00:11:37.177 "name": "BaseBdev2", 00:11:37.177 "uuid": "31b1fffc-7c6d-4daa-8348-a5d92430ef46", 00:11:37.177 "is_configured": true, 00:11:37.177 
"data_offset": 0, 00:11:37.177 "data_size": 65536 00:11:37.177 }, 00:11:37.177 { 00:11:37.177 "name": "BaseBdev3", 00:11:37.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.177 "is_configured": false, 00:11:37.177 "data_offset": 0, 00:11:37.177 "data_size": 0 00:11:37.177 }, 00:11:37.177 { 00:11:37.177 "name": "BaseBdev4", 00:11:37.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.177 "is_configured": false, 00:11:37.177 "data_offset": 0, 00:11:37.177 "data_size": 0 00:11:37.177 } 00:11:37.177 ] 00:11:37.177 }' 00:11:37.177 17:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.177 17:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.436 17:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:37.437 17:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.437 17:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.437 [2024-12-07 17:28:10.811437] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:37.437 BaseBdev3 00:11:37.437 17:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.437 17:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:37.437 17:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:37.437 17:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:37.437 17:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:37.437 17:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:37.437 17:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:11:37.437 17:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:37.437 17:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.437 17:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.697 17:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.697 17:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:37.697 17:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.697 17:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.697 [ 00:11:37.697 { 00:11:37.697 "name": "BaseBdev3", 00:11:37.697 "aliases": [ 00:11:37.697 "6f270603-91f8-4cfd-8ea1-84f04ad036cc" 00:11:37.697 ], 00:11:37.697 "product_name": "Malloc disk", 00:11:37.697 "block_size": 512, 00:11:37.697 "num_blocks": 65536, 00:11:37.697 "uuid": "6f270603-91f8-4cfd-8ea1-84f04ad036cc", 00:11:37.697 "assigned_rate_limits": { 00:11:37.697 "rw_ios_per_sec": 0, 00:11:37.697 "rw_mbytes_per_sec": 0, 00:11:37.697 "r_mbytes_per_sec": 0, 00:11:37.697 "w_mbytes_per_sec": 0 00:11:37.697 }, 00:11:37.697 "claimed": true, 00:11:37.697 "claim_type": "exclusive_write", 00:11:37.697 "zoned": false, 00:11:37.697 "supported_io_types": { 00:11:37.697 "read": true, 00:11:37.697 "write": true, 00:11:37.697 "unmap": true, 00:11:37.697 "flush": true, 00:11:37.697 "reset": true, 00:11:37.697 "nvme_admin": false, 00:11:37.697 "nvme_io": false, 00:11:37.697 "nvme_io_md": false, 00:11:37.697 "write_zeroes": true, 00:11:37.697 "zcopy": true, 00:11:37.697 "get_zone_info": false, 00:11:37.697 "zone_management": false, 00:11:37.697 "zone_append": false, 00:11:37.697 "compare": false, 00:11:37.697 "compare_and_write": false, 00:11:37.697 "abort": true, 
00:11:37.697 "seek_hole": false, 00:11:37.697 "seek_data": false, 00:11:37.697 "copy": true, 00:11:37.697 "nvme_iov_md": false 00:11:37.697 }, 00:11:37.697 "memory_domains": [ 00:11:37.697 { 00:11:37.697 "dma_device_id": "system", 00:11:37.697 "dma_device_type": 1 00:11:37.697 }, 00:11:37.697 { 00:11:37.697 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.697 "dma_device_type": 2 00:11:37.697 } 00:11:37.697 ], 00:11:37.697 "driver_specific": {} 00:11:37.697 } 00:11:37.697 ] 00:11:37.697 17:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.697 17:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:37.697 17:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:37.697 17:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:37.697 17:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:37.697 17:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:37.697 17:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:37.697 17:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:37.697 17:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:37.697 17:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:37.697 17:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.697 17:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.697 17:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.697 17:28:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.697 17:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.697 17:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.697 17:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.697 17:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.697 17:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.697 17:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.697 "name": "Existed_Raid", 00:11:37.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.697 "strip_size_kb": 0, 00:11:37.697 "state": "configuring", 00:11:37.697 "raid_level": "raid1", 00:11:37.697 "superblock": false, 00:11:37.697 "num_base_bdevs": 4, 00:11:37.697 "num_base_bdevs_discovered": 3, 00:11:37.697 "num_base_bdevs_operational": 4, 00:11:37.697 "base_bdevs_list": [ 00:11:37.697 { 00:11:37.697 "name": "BaseBdev1", 00:11:37.697 "uuid": "3cc8947f-cacb-41ab-bf44-8cfe722e3e43", 00:11:37.697 "is_configured": true, 00:11:37.697 "data_offset": 0, 00:11:37.697 "data_size": 65536 00:11:37.697 }, 00:11:37.697 { 00:11:37.697 "name": "BaseBdev2", 00:11:37.697 "uuid": "31b1fffc-7c6d-4daa-8348-a5d92430ef46", 00:11:37.697 "is_configured": true, 00:11:37.697 "data_offset": 0, 00:11:37.697 "data_size": 65536 00:11:37.697 }, 00:11:37.697 { 00:11:37.697 "name": "BaseBdev3", 00:11:37.697 "uuid": "6f270603-91f8-4cfd-8ea1-84f04ad036cc", 00:11:37.697 "is_configured": true, 00:11:37.697 "data_offset": 0, 00:11:37.697 "data_size": 65536 00:11:37.697 }, 00:11:37.697 { 00:11:37.697 "name": "BaseBdev4", 00:11:37.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.697 "is_configured": false, 00:11:37.697 "data_offset": 
0, 00:11:37.697 "data_size": 0 00:11:37.698 } 00:11:37.698 ] 00:11:37.698 }' 00:11:37.698 17:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.698 17:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.268 17:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:38.268 17:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.268 17:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.268 [2024-12-07 17:28:11.388407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:38.268 [2024-12-07 17:28:11.388471] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:38.268 [2024-12-07 17:28:11.388480] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:38.269 [2024-12-07 17:28:11.388784] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:38.269 [2024-12-07 17:28:11.389021] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:38.269 [2024-12-07 17:28:11.389038] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:38.269 [2024-12-07 17:28:11.389351] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:38.269 BaseBdev4 00:11:38.269 17:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.269 17:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:38.269 17:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:38.269 17:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local 
bdev_timeout= 00:11:38.269 17:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:38.269 17:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:38.269 17:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:38.269 17:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:38.269 17:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.269 17:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.269 17:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.269 17:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:38.269 17:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.269 17:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.269 [ 00:11:38.269 { 00:11:38.269 "name": "BaseBdev4", 00:11:38.269 "aliases": [ 00:11:38.269 "b9021e41-3811-44bd-acae-9e2c048614ad" 00:11:38.269 ], 00:11:38.269 "product_name": "Malloc disk", 00:11:38.269 "block_size": 512, 00:11:38.269 "num_blocks": 65536, 00:11:38.269 "uuid": "b9021e41-3811-44bd-acae-9e2c048614ad", 00:11:38.269 "assigned_rate_limits": { 00:11:38.269 "rw_ios_per_sec": 0, 00:11:38.269 "rw_mbytes_per_sec": 0, 00:11:38.269 "r_mbytes_per_sec": 0, 00:11:38.269 "w_mbytes_per_sec": 0 00:11:38.269 }, 00:11:38.269 "claimed": true, 00:11:38.269 "claim_type": "exclusive_write", 00:11:38.269 "zoned": false, 00:11:38.269 "supported_io_types": { 00:11:38.269 "read": true, 00:11:38.269 "write": true, 00:11:38.269 "unmap": true, 00:11:38.269 "flush": true, 00:11:38.269 "reset": true, 00:11:38.269 "nvme_admin": false, 00:11:38.269 "nvme_io": 
false, 00:11:38.269 "nvme_io_md": false, 00:11:38.269 "write_zeroes": true, 00:11:38.269 "zcopy": true, 00:11:38.269 "get_zone_info": false, 00:11:38.269 "zone_management": false, 00:11:38.269 "zone_append": false, 00:11:38.269 "compare": false, 00:11:38.269 "compare_and_write": false, 00:11:38.269 "abort": true, 00:11:38.269 "seek_hole": false, 00:11:38.269 "seek_data": false, 00:11:38.269 "copy": true, 00:11:38.269 "nvme_iov_md": false 00:11:38.269 }, 00:11:38.269 "memory_domains": [ 00:11:38.269 { 00:11:38.269 "dma_device_id": "system", 00:11:38.269 "dma_device_type": 1 00:11:38.269 }, 00:11:38.269 { 00:11:38.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.269 "dma_device_type": 2 00:11:38.269 } 00:11:38.269 ], 00:11:38.269 "driver_specific": {} 00:11:38.269 } 00:11:38.269 ] 00:11:38.269 17:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.269 17:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:38.269 17:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:38.269 17:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:38.269 17:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:38.269 17:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:38.269 17:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:38.269 17:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:38.269 17:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:38.269 17:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:38.269 17:28:11 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.269 17:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.269 17:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.269 17:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.269 17:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.269 17:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.269 17:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.269 17:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.269 17:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.269 17:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.269 "name": "Existed_Raid", 00:11:38.269 "uuid": "5cd21d07-c8bc-4767-ad86-f173798bf3ba", 00:11:38.269 "strip_size_kb": 0, 00:11:38.269 "state": "online", 00:11:38.269 "raid_level": "raid1", 00:11:38.269 "superblock": false, 00:11:38.269 "num_base_bdevs": 4, 00:11:38.269 "num_base_bdevs_discovered": 4, 00:11:38.269 "num_base_bdevs_operational": 4, 00:11:38.269 "base_bdevs_list": [ 00:11:38.269 { 00:11:38.269 "name": "BaseBdev1", 00:11:38.269 "uuid": "3cc8947f-cacb-41ab-bf44-8cfe722e3e43", 00:11:38.269 "is_configured": true, 00:11:38.269 "data_offset": 0, 00:11:38.269 "data_size": 65536 00:11:38.269 }, 00:11:38.269 { 00:11:38.269 "name": "BaseBdev2", 00:11:38.269 "uuid": "31b1fffc-7c6d-4daa-8348-a5d92430ef46", 00:11:38.269 "is_configured": true, 00:11:38.269 "data_offset": 0, 00:11:38.269 "data_size": 65536 00:11:38.269 }, 00:11:38.269 { 00:11:38.269 "name": "BaseBdev3", 00:11:38.269 "uuid": "6f270603-91f8-4cfd-8ea1-84f04ad036cc", 
00:11:38.269 "is_configured": true, 00:11:38.269 "data_offset": 0, 00:11:38.269 "data_size": 65536 00:11:38.269 }, 00:11:38.269 { 00:11:38.269 "name": "BaseBdev4", 00:11:38.269 "uuid": "b9021e41-3811-44bd-acae-9e2c048614ad", 00:11:38.269 "is_configured": true, 00:11:38.269 "data_offset": 0, 00:11:38.269 "data_size": 65536 00:11:38.269 } 00:11:38.269 ] 00:11:38.269 }' 00:11:38.269 17:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.269 17:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.530 17:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:38.530 17:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:38.530 17:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:38.530 17:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:38.530 17:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:38.530 17:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:38.530 17:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:38.530 17:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:38.530 17:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.530 17:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.530 [2024-12-07 17:28:11.892024] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:38.790 17:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.790 17:28:11 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:38.790 "name": "Existed_Raid", 00:11:38.790 "aliases": [ 00:11:38.790 "5cd21d07-c8bc-4767-ad86-f173798bf3ba" 00:11:38.790 ], 00:11:38.790 "product_name": "Raid Volume", 00:11:38.790 "block_size": 512, 00:11:38.790 "num_blocks": 65536, 00:11:38.790 "uuid": "5cd21d07-c8bc-4767-ad86-f173798bf3ba", 00:11:38.790 "assigned_rate_limits": { 00:11:38.790 "rw_ios_per_sec": 0, 00:11:38.790 "rw_mbytes_per_sec": 0, 00:11:38.790 "r_mbytes_per_sec": 0, 00:11:38.790 "w_mbytes_per_sec": 0 00:11:38.790 }, 00:11:38.790 "claimed": false, 00:11:38.790 "zoned": false, 00:11:38.790 "supported_io_types": { 00:11:38.790 "read": true, 00:11:38.790 "write": true, 00:11:38.790 "unmap": false, 00:11:38.790 "flush": false, 00:11:38.790 "reset": true, 00:11:38.790 "nvme_admin": false, 00:11:38.790 "nvme_io": false, 00:11:38.790 "nvme_io_md": false, 00:11:38.790 "write_zeroes": true, 00:11:38.790 "zcopy": false, 00:11:38.790 "get_zone_info": false, 00:11:38.790 "zone_management": false, 00:11:38.790 "zone_append": false, 00:11:38.790 "compare": false, 00:11:38.790 "compare_and_write": false, 00:11:38.790 "abort": false, 00:11:38.790 "seek_hole": false, 00:11:38.790 "seek_data": false, 00:11:38.790 "copy": false, 00:11:38.790 "nvme_iov_md": false 00:11:38.790 }, 00:11:38.790 "memory_domains": [ 00:11:38.790 { 00:11:38.790 "dma_device_id": "system", 00:11:38.790 "dma_device_type": 1 00:11:38.790 }, 00:11:38.790 { 00:11:38.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.790 "dma_device_type": 2 00:11:38.790 }, 00:11:38.790 { 00:11:38.790 "dma_device_id": "system", 00:11:38.790 "dma_device_type": 1 00:11:38.790 }, 00:11:38.790 { 00:11:38.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.790 "dma_device_type": 2 00:11:38.790 }, 00:11:38.790 { 00:11:38.790 "dma_device_id": "system", 00:11:38.790 "dma_device_type": 1 00:11:38.790 }, 00:11:38.790 { 00:11:38.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.790 "dma_device_type": 2 
00:11:38.790 }, 00:11:38.790 { 00:11:38.790 "dma_device_id": "system", 00:11:38.790 "dma_device_type": 1 00:11:38.790 }, 00:11:38.790 { 00:11:38.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.790 "dma_device_type": 2 00:11:38.790 } 00:11:38.790 ], 00:11:38.790 "driver_specific": { 00:11:38.790 "raid": { 00:11:38.790 "uuid": "5cd21d07-c8bc-4767-ad86-f173798bf3ba", 00:11:38.790 "strip_size_kb": 0, 00:11:38.790 "state": "online", 00:11:38.790 "raid_level": "raid1", 00:11:38.790 "superblock": false, 00:11:38.790 "num_base_bdevs": 4, 00:11:38.790 "num_base_bdevs_discovered": 4, 00:11:38.790 "num_base_bdevs_operational": 4, 00:11:38.790 "base_bdevs_list": [ 00:11:38.790 { 00:11:38.790 "name": "BaseBdev1", 00:11:38.790 "uuid": "3cc8947f-cacb-41ab-bf44-8cfe722e3e43", 00:11:38.790 "is_configured": true, 00:11:38.790 "data_offset": 0, 00:11:38.790 "data_size": 65536 00:11:38.790 }, 00:11:38.790 { 00:11:38.790 "name": "BaseBdev2", 00:11:38.790 "uuid": "31b1fffc-7c6d-4daa-8348-a5d92430ef46", 00:11:38.790 "is_configured": true, 00:11:38.790 "data_offset": 0, 00:11:38.790 "data_size": 65536 00:11:38.790 }, 00:11:38.790 { 00:11:38.790 "name": "BaseBdev3", 00:11:38.790 "uuid": "6f270603-91f8-4cfd-8ea1-84f04ad036cc", 00:11:38.790 "is_configured": true, 00:11:38.790 "data_offset": 0, 00:11:38.790 "data_size": 65536 00:11:38.790 }, 00:11:38.790 { 00:11:38.790 "name": "BaseBdev4", 00:11:38.790 "uuid": "b9021e41-3811-44bd-acae-9e2c048614ad", 00:11:38.790 "is_configured": true, 00:11:38.790 "data_offset": 0, 00:11:38.790 "data_size": 65536 00:11:38.790 } 00:11:38.790 ] 00:11:38.790 } 00:11:38.790 } 00:11:38.790 }' 00:11:38.790 17:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:38.790 17:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:38.790 BaseBdev2 00:11:38.790 BaseBdev3 00:11:38.790 BaseBdev4' 00:11:38.790 
17:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:38.791 17:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:38.791 17:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:38.791 17:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:38.791 17:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.791 17:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.791 17:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:38.791 17:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.791 17:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:38.791 17:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:38.791 17:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:38.791 17:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:38.791 17:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:38.791 17:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.791 17:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.791 17:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.791 17:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 
-- # cmp_base_bdev='512 ' 00:11:38.791 17:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:38.791 17:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:38.791 17:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:38.791 17:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:38.791 17:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.791 17:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.791 17:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.791 17:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:38.791 17:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:38.791 17:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:38.791 17:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:38.791 17:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:38.791 17:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.791 17:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.051 17:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.051 17:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:39.051 17:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 
512 == \5\1\2\ \ \ ]] 00:11:39.051 17:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:39.051 17:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.051 17:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.051 [2024-12-07 17:28:12.207207] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:39.051 17:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.051 17:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:39.051 17:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:39.051 17:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:39.051 17:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:39.051 17:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:39.051 17:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:39.051 17:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:39.051 17:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:39.051 17:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:39.051 17:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:39.051 17:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:39.051 17:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.051 17:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:11:39.051 17:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.051 17:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.051 17:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.051 17:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.051 17:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.051 17:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.051 17:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.051 17:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.051 "name": "Existed_Raid", 00:11:39.051 "uuid": "5cd21d07-c8bc-4767-ad86-f173798bf3ba", 00:11:39.051 "strip_size_kb": 0, 00:11:39.051 "state": "online", 00:11:39.051 "raid_level": "raid1", 00:11:39.051 "superblock": false, 00:11:39.051 "num_base_bdevs": 4, 00:11:39.051 "num_base_bdevs_discovered": 3, 00:11:39.051 "num_base_bdevs_operational": 3, 00:11:39.051 "base_bdevs_list": [ 00:11:39.051 { 00:11:39.051 "name": null, 00:11:39.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.051 "is_configured": false, 00:11:39.051 "data_offset": 0, 00:11:39.051 "data_size": 65536 00:11:39.051 }, 00:11:39.051 { 00:11:39.051 "name": "BaseBdev2", 00:11:39.051 "uuid": "31b1fffc-7c6d-4daa-8348-a5d92430ef46", 00:11:39.051 "is_configured": true, 00:11:39.051 "data_offset": 0, 00:11:39.051 "data_size": 65536 00:11:39.051 }, 00:11:39.051 { 00:11:39.051 "name": "BaseBdev3", 00:11:39.051 "uuid": "6f270603-91f8-4cfd-8ea1-84f04ad036cc", 00:11:39.051 "is_configured": true, 00:11:39.051 "data_offset": 0, 00:11:39.051 "data_size": 65536 00:11:39.051 }, 00:11:39.051 { 
00:11:39.051 "name": "BaseBdev4", 00:11:39.051 "uuid": "b9021e41-3811-44bd-acae-9e2c048614ad", 00:11:39.051 "is_configured": true, 00:11:39.051 "data_offset": 0, 00:11:39.051 "data_size": 65536 00:11:39.051 } 00:11:39.051 ] 00:11:39.051 }' 00:11:39.051 17:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.051 17:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.622 17:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:39.622 17:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:39.622 17:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.622 17:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:39.622 17:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.622 17:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.622 17:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.622 17:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:39.622 17:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:39.622 17:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:39.622 17:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.622 17:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.622 [2024-12-07 17:28:12.810256] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:39.622 17:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.622 
17:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:39.622 17:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:39.622 17:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.622 17:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:39.622 17:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.622 17:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.622 17:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.622 17:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:39.622 17:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:39.622 17:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:39.622 17:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.622 17:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.622 [2024-12-07 17:28:12.973434] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:39.882 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.882 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:39.882 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:39.882 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:39.882 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.882 17:28:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.882 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.882 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.882 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:39.882 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:39.882 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:39.882 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.882 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.882 [2024-12-07 17:28:13.125700] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:39.882 [2024-12-07 17:28:13.125826] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:39.882 [2024-12-07 17:28:13.231425] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:39.882 [2024-12-07 17:28:13.231580] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:39.882 [2024-12-07 17:28:13.231628] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:39.882 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.882 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:39.882 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:39.882 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.882 17:28:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.882 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.882 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:39.882 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.142 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:40.142 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:40.142 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:40.142 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:40.142 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:40.142 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:40.142 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.142 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.142 BaseBdev2 00:11:40.142 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.142 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:40.142 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:40.142 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:40.142 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:40.142 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:40.142 17:28:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:40.142 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:40.142 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.142 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.143 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.143 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:40.143 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.143 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.143 [ 00:11:40.143 { 00:11:40.143 "name": "BaseBdev2", 00:11:40.143 "aliases": [ 00:11:40.143 "475e2bc5-9752-4462-9579-bb40cc587457" 00:11:40.143 ], 00:11:40.143 "product_name": "Malloc disk", 00:11:40.143 "block_size": 512, 00:11:40.143 "num_blocks": 65536, 00:11:40.143 "uuid": "475e2bc5-9752-4462-9579-bb40cc587457", 00:11:40.143 "assigned_rate_limits": { 00:11:40.143 "rw_ios_per_sec": 0, 00:11:40.143 "rw_mbytes_per_sec": 0, 00:11:40.143 "r_mbytes_per_sec": 0, 00:11:40.143 "w_mbytes_per_sec": 0 00:11:40.143 }, 00:11:40.143 "claimed": false, 00:11:40.143 "zoned": false, 00:11:40.143 "supported_io_types": { 00:11:40.143 "read": true, 00:11:40.143 "write": true, 00:11:40.143 "unmap": true, 00:11:40.143 "flush": true, 00:11:40.143 "reset": true, 00:11:40.143 "nvme_admin": false, 00:11:40.143 "nvme_io": false, 00:11:40.143 "nvme_io_md": false, 00:11:40.143 "write_zeroes": true, 00:11:40.143 "zcopy": true, 00:11:40.143 "get_zone_info": false, 00:11:40.143 "zone_management": false, 00:11:40.143 "zone_append": false, 00:11:40.143 "compare": false, 00:11:40.143 "compare_and_write": false, 
00:11:40.143 "abort": true, 00:11:40.143 "seek_hole": false, 00:11:40.143 "seek_data": false, 00:11:40.143 "copy": true, 00:11:40.143 "nvme_iov_md": false 00:11:40.143 }, 00:11:40.143 "memory_domains": [ 00:11:40.143 { 00:11:40.143 "dma_device_id": "system", 00:11:40.143 "dma_device_type": 1 00:11:40.143 }, 00:11:40.143 { 00:11:40.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.143 "dma_device_type": 2 00:11:40.143 } 00:11:40.143 ], 00:11:40.143 "driver_specific": {} 00:11:40.143 } 00:11:40.143 ] 00:11:40.143 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.143 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:40.143 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:40.143 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:40.143 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:40.143 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.143 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.143 BaseBdev3 00:11:40.143 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.143 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:40.143 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:40.143 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:40.143 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:40.143 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:40.143 17:28:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:40.143 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:40.143 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.143 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.143 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.143 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:40.143 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.143 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.143 [ 00:11:40.143 { 00:11:40.143 "name": "BaseBdev3", 00:11:40.143 "aliases": [ 00:11:40.143 "b1f32700-2ec0-45b0-b723-05a0374ed23c" 00:11:40.143 ], 00:11:40.143 "product_name": "Malloc disk", 00:11:40.143 "block_size": 512, 00:11:40.143 "num_blocks": 65536, 00:11:40.143 "uuid": "b1f32700-2ec0-45b0-b723-05a0374ed23c", 00:11:40.143 "assigned_rate_limits": { 00:11:40.143 "rw_ios_per_sec": 0, 00:11:40.143 "rw_mbytes_per_sec": 0, 00:11:40.143 "r_mbytes_per_sec": 0, 00:11:40.143 "w_mbytes_per_sec": 0 00:11:40.143 }, 00:11:40.143 "claimed": false, 00:11:40.143 "zoned": false, 00:11:40.143 "supported_io_types": { 00:11:40.143 "read": true, 00:11:40.143 "write": true, 00:11:40.143 "unmap": true, 00:11:40.143 "flush": true, 00:11:40.143 "reset": true, 00:11:40.143 "nvme_admin": false, 00:11:40.143 "nvme_io": false, 00:11:40.143 "nvme_io_md": false, 00:11:40.143 "write_zeroes": true, 00:11:40.143 "zcopy": true, 00:11:40.143 "get_zone_info": false, 00:11:40.143 "zone_management": false, 00:11:40.143 "zone_append": false, 00:11:40.143 "compare": false, 00:11:40.143 "compare_and_write": false, 
00:11:40.143 "abort": true, 00:11:40.143 "seek_hole": false, 00:11:40.143 "seek_data": false, 00:11:40.143 "copy": true, 00:11:40.143 "nvme_iov_md": false 00:11:40.143 }, 00:11:40.143 "memory_domains": [ 00:11:40.143 { 00:11:40.143 "dma_device_id": "system", 00:11:40.143 "dma_device_type": 1 00:11:40.143 }, 00:11:40.143 { 00:11:40.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.143 "dma_device_type": 2 00:11:40.143 } 00:11:40.143 ], 00:11:40.143 "driver_specific": {} 00:11:40.143 } 00:11:40.143 ] 00:11:40.143 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.143 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:40.143 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:40.143 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:40.143 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:40.143 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.143 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.143 BaseBdev4 00:11:40.143 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.143 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:40.143 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:40.143 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:40.143 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:40.143 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:40.143 17:28:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:40.143 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:40.143 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.143 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.143 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.143 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:40.143 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.143 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.403 [ 00:11:40.403 { 00:11:40.403 "name": "BaseBdev4", 00:11:40.403 "aliases": [ 00:11:40.403 "756326c7-143d-46fc-9b3e-103cf65e858f" 00:11:40.403 ], 00:11:40.403 "product_name": "Malloc disk", 00:11:40.403 "block_size": 512, 00:11:40.403 "num_blocks": 65536, 00:11:40.403 "uuid": "756326c7-143d-46fc-9b3e-103cf65e858f", 00:11:40.403 "assigned_rate_limits": { 00:11:40.403 "rw_ios_per_sec": 0, 00:11:40.403 "rw_mbytes_per_sec": 0, 00:11:40.403 "r_mbytes_per_sec": 0, 00:11:40.403 "w_mbytes_per_sec": 0 00:11:40.403 }, 00:11:40.403 "claimed": false, 00:11:40.403 "zoned": false, 00:11:40.403 "supported_io_types": { 00:11:40.403 "read": true, 00:11:40.403 "write": true, 00:11:40.403 "unmap": true, 00:11:40.403 "flush": true, 00:11:40.403 "reset": true, 00:11:40.403 "nvme_admin": false, 00:11:40.403 "nvme_io": false, 00:11:40.403 "nvme_io_md": false, 00:11:40.403 "write_zeroes": true, 00:11:40.403 "zcopy": true, 00:11:40.403 "get_zone_info": false, 00:11:40.403 "zone_management": false, 00:11:40.403 "zone_append": false, 00:11:40.403 "compare": false, 00:11:40.403 "compare_and_write": false, 
00:11:40.403 "abort": true, 00:11:40.403 "seek_hole": false, 00:11:40.403 "seek_data": false, 00:11:40.403 "copy": true, 00:11:40.403 "nvme_iov_md": false 00:11:40.403 }, 00:11:40.403 "memory_domains": [ 00:11:40.403 { 00:11:40.403 "dma_device_id": "system", 00:11:40.403 "dma_device_type": 1 00:11:40.403 }, 00:11:40.403 { 00:11:40.403 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.403 "dma_device_type": 2 00:11:40.403 } 00:11:40.403 ], 00:11:40.403 "driver_specific": {} 00:11:40.403 } 00:11:40.403 ] 00:11:40.403 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.403 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:40.403 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:40.403 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:40.403 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:40.403 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.403 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.403 [2024-12-07 17:28:13.547867] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:40.403 [2024-12-07 17:28:13.547978] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:40.403 [2024-12-07 17:28:13.548023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:40.403 [2024-12-07 17:28:13.550138] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:40.403 [2024-12-07 17:28:13.550227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:40.403 17:28:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.403 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:40.403 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.403 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:40.403 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:40.403 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:40.403 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:40.403 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.403 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.403 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.403 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.403 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.403 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.403 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.403 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.403 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.403 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.403 "name": "Existed_Raid", 00:11:40.403 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:40.403 "strip_size_kb": 0, 00:11:40.403 "state": "configuring", 00:11:40.403 "raid_level": "raid1", 00:11:40.403 "superblock": false, 00:11:40.403 "num_base_bdevs": 4, 00:11:40.403 "num_base_bdevs_discovered": 3, 00:11:40.403 "num_base_bdevs_operational": 4, 00:11:40.403 "base_bdevs_list": [ 00:11:40.403 { 00:11:40.403 "name": "BaseBdev1", 00:11:40.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.403 "is_configured": false, 00:11:40.403 "data_offset": 0, 00:11:40.403 "data_size": 0 00:11:40.403 }, 00:11:40.403 { 00:11:40.403 "name": "BaseBdev2", 00:11:40.403 "uuid": "475e2bc5-9752-4462-9579-bb40cc587457", 00:11:40.403 "is_configured": true, 00:11:40.404 "data_offset": 0, 00:11:40.404 "data_size": 65536 00:11:40.404 }, 00:11:40.404 { 00:11:40.404 "name": "BaseBdev3", 00:11:40.404 "uuid": "b1f32700-2ec0-45b0-b723-05a0374ed23c", 00:11:40.404 "is_configured": true, 00:11:40.404 "data_offset": 0, 00:11:40.404 "data_size": 65536 00:11:40.404 }, 00:11:40.404 { 00:11:40.404 "name": "BaseBdev4", 00:11:40.404 "uuid": "756326c7-143d-46fc-9b3e-103cf65e858f", 00:11:40.404 "is_configured": true, 00:11:40.404 "data_offset": 0, 00:11:40.404 "data_size": 65536 00:11:40.404 } 00:11:40.404 ] 00:11:40.404 }' 00:11:40.404 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.404 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.663 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:40.663 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.663 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.663 [2024-12-07 17:28:13.987107] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:40.663 17:28:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.663 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:40.663 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.663 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:40.664 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:40.664 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:40.664 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:40.664 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.664 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.664 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.664 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.664 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.664 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.664 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.664 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.664 17:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.664 17:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.664 "name": "Existed_Raid", 00:11:40.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.664 
"strip_size_kb": 0, 00:11:40.664 "state": "configuring", 00:11:40.664 "raid_level": "raid1", 00:11:40.664 "superblock": false, 00:11:40.664 "num_base_bdevs": 4, 00:11:40.664 "num_base_bdevs_discovered": 2, 00:11:40.664 "num_base_bdevs_operational": 4, 00:11:40.664 "base_bdevs_list": [ 00:11:40.664 { 00:11:40.664 "name": "BaseBdev1", 00:11:40.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.664 "is_configured": false, 00:11:40.664 "data_offset": 0, 00:11:40.664 "data_size": 0 00:11:40.664 }, 00:11:40.664 { 00:11:40.664 "name": null, 00:11:40.664 "uuid": "475e2bc5-9752-4462-9579-bb40cc587457", 00:11:40.664 "is_configured": false, 00:11:40.664 "data_offset": 0, 00:11:40.664 "data_size": 65536 00:11:40.664 }, 00:11:40.664 { 00:11:40.664 "name": "BaseBdev3", 00:11:40.664 "uuid": "b1f32700-2ec0-45b0-b723-05a0374ed23c", 00:11:40.664 "is_configured": true, 00:11:40.664 "data_offset": 0, 00:11:40.664 "data_size": 65536 00:11:40.664 }, 00:11:40.664 { 00:11:40.664 "name": "BaseBdev4", 00:11:40.664 "uuid": "756326c7-143d-46fc-9b3e-103cf65e858f", 00:11:40.664 "is_configured": true, 00:11:40.664 "data_offset": 0, 00:11:40.664 "data_size": 65536 00:11:40.664 } 00:11:40.664 ] 00:11:40.664 }' 00:11:40.664 17:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.664 17:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.240 17:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.240 17:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.240 17:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.240 17:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:41.240 17:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.240 17:28:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:41.240 17:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:41.240 17:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.240 17:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.240 [2024-12-07 17:28:14.484599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:41.240 BaseBdev1 00:11:41.240 17:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.240 17:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:41.240 17:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:41.240 17:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:41.240 17:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:41.240 17:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:41.240 17:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:41.240 17:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:41.240 17:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.240 17:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.240 17:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.240 17:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:41.240 17:28:14 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.240 17:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.240 [ 00:11:41.240 { 00:11:41.240 "name": "BaseBdev1", 00:11:41.240 "aliases": [ 00:11:41.240 "b27cd105-97ab-40ca-874f-bceb9425da8b" 00:11:41.240 ], 00:11:41.240 "product_name": "Malloc disk", 00:11:41.240 "block_size": 512, 00:11:41.240 "num_blocks": 65536, 00:11:41.240 "uuid": "b27cd105-97ab-40ca-874f-bceb9425da8b", 00:11:41.240 "assigned_rate_limits": { 00:11:41.240 "rw_ios_per_sec": 0, 00:11:41.240 "rw_mbytes_per_sec": 0, 00:11:41.240 "r_mbytes_per_sec": 0, 00:11:41.240 "w_mbytes_per_sec": 0 00:11:41.240 }, 00:11:41.240 "claimed": true, 00:11:41.240 "claim_type": "exclusive_write", 00:11:41.240 "zoned": false, 00:11:41.240 "supported_io_types": { 00:11:41.240 "read": true, 00:11:41.240 "write": true, 00:11:41.240 "unmap": true, 00:11:41.240 "flush": true, 00:11:41.240 "reset": true, 00:11:41.240 "nvme_admin": false, 00:11:41.240 "nvme_io": false, 00:11:41.240 "nvme_io_md": false, 00:11:41.240 "write_zeroes": true, 00:11:41.240 "zcopy": true, 00:11:41.240 "get_zone_info": false, 00:11:41.240 "zone_management": false, 00:11:41.240 "zone_append": false, 00:11:41.240 "compare": false, 00:11:41.240 "compare_and_write": false, 00:11:41.240 "abort": true, 00:11:41.240 "seek_hole": false, 00:11:41.240 "seek_data": false, 00:11:41.240 "copy": true, 00:11:41.240 "nvme_iov_md": false 00:11:41.240 }, 00:11:41.240 "memory_domains": [ 00:11:41.240 { 00:11:41.240 "dma_device_id": "system", 00:11:41.240 "dma_device_type": 1 00:11:41.240 }, 00:11:41.240 { 00:11:41.240 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.240 "dma_device_type": 2 00:11:41.240 } 00:11:41.240 ], 00:11:41.240 "driver_specific": {} 00:11:41.240 } 00:11:41.240 ] 00:11:41.240 17:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.240 17:28:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@911 -- # return 0 00:11:41.240 17:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:41.240 17:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.240 17:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:41.240 17:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:41.240 17:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:41.240 17:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:41.240 17:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.240 17:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.240 17:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.240 17:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.240 17:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.240 17:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.240 17:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.240 17:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.240 17:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.240 17:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.240 "name": "Existed_Raid", 00:11:41.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.240 
"strip_size_kb": 0, 00:11:41.240 "state": "configuring", 00:11:41.240 "raid_level": "raid1", 00:11:41.240 "superblock": false, 00:11:41.240 "num_base_bdevs": 4, 00:11:41.240 "num_base_bdevs_discovered": 3, 00:11:41.240 "num_base_bdevs_operational": 4, 00:11:41.240 "base_bdevs_list": [ 00:11:41.241 { 00:11:41.241 "name": "BaseBdev1", 00:11:41.241 "uuid": "b27cd105-97ab-40ca-874f-bceb9425da8b", 00:11:41.241 "is_configured": true, 00:11:41.241 "data_offset": 0, 00:11:41.241 "data_size": 65536 00:11:41.241 }, 00:11:41.241 { 00:11:41.241 "name": null, 00:11:41.241 "uuid": "475e2bc5-9752-4462-9579-bb40cc587457", 00:11:41.241 "is_configured": false, 00:11:41.241 "data_offset": 0, 00:11:41.241 "data_size": 65536 00:11:41.241 }, 00:11:41.241 { 00:11:41.241 "name": "BaseBdev3", 00:11:41.241 "uuid": "b1f32700-2ec0-45b0-b723-05a0374ed23c", 00:11:41.241 "is_configured": true, 00:11:41.241 "data_offset": 0, 00:11:41.241 "data_size": 65536 00:11:41.241 }, 00:11:41.241 { 00:11:41.241 "name": "BaseBdev4", 00:11:41.241 "uuid": "756326c7-143d-46fc-9b3e-103cf65e858f", 00:11:41.241 "is_configured": true, 00:11:41.241 "data_offset": 0, 00:11:41.241 "data_size": 65536 00:11:41.241 } 00:11:41.241 ] 00:11:41.241 }' 00:11:41.241 17:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.241 17:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.840 17:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:41.840 17:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.840 17:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.840 17:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.840 17:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.840 
17:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:41.840 17:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:41.840 17:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.840 17:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.840 [2024-12-07 17:28:15.031729] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:41.840 17:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.840 17:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:41.840 17:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.840 17:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:41.840 17:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:41.840 17:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:41.840 17:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:41.840 17:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.840 17:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.841 17:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.841 17:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.841 17:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.841 17:28:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.841 17:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.841 17:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.841 17:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.841 17:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.841 "name": "Existed_Raid", 00:11:41.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.841 "strip_size_kb": 0, 00:11:41.841 "state": "configuring", 00:11:41.841 "raid_level": "raid1", 00:11:41.841 "superblock": false, 00:11:41.841 "num_base_bdevs": 4, 00:11:41.841 "num_base_bdevs_discovered": 2, 00:11:41.841 "num_base_bdevs_operational": 4, 00:11:41.841 "base_bdevs_list": [ 00:11:41.841 { 00:11:41.841 "name": "BaseBdev1", 00:11:41.841 "uuid": "b27cd105-97ab-40ca-874f-bceb9425da8b", 00:11:41.841 "is_configured": true, 00:11:41.841 "data_offset": 0, 00:11:41.841 "data_size": 65536 00:11:41.841 }, 00:11:41.841 { 00:11:41.841 "name": null, 00:11:41.841 "uuid": "475e2bc5-9752-4462-9579-bb40cc587457", 00:11:41.841 "is_configured": false, 00:11:41.841 "data_offset": 0, 00:11:41.841 "data_size": 65536 00:11:41.841 }, 00:11:41.841 { 00:11:41.841 "name": null, 00:11:41.841 "uuid": "b1f32700-2ec0-45b0-b723-05a0374ed23c", 00:11:41.841 "is_configured": false, 00:11:41.841 "data_offset": 0, 00:11:41.841 "data_size": 65536 00:11:41.841 }, 00:11:41.841 { 00:11:41.841 "name": "BaseBdev4", 00:11:41.841 "uuid": "756326c7-143d-46fc-9b3e-103cf65e858f", 00:11:41.841 "is_configured": true, 00:11:41.841 "data_offset": 0, 00:11:41.841 "data_size": 65536 00:11:41.841 } 00:11:41.841 ] 00:11:41.841 }' 00:11:41.841 17:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.841 17:28:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:42.410 17:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.410 17:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.410 17:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.410 17:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:42.410 17:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.410 17:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:42.410 17:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:42.410 17:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.410 17:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.410 [2024-12-07 17:28:15.538893] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:42.410 17:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.410 17:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:42.410 17:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:42.410 17:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:42.410 17:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:42.410 17:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:42.410 17:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:11:42.410 17:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.410 17:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.410 17:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.410 17:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.410 17:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.410 17:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.410 17:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.410 17:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:42.410 17:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.410 17:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.410 "name": "Existed_Raid", 00:11:42.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.410 "strip_size_kb": 0, 00:11:42.410 "state": "configuring", 00:11:42.410 "raid_level": "raid1", 00:11:42.410 "superblock": false, 00:11:42.410 "num_base_bdevs": 4, 00:11:42.410 "num_base_bdevs_discovered": 3, 00:11:42.410 "num_base_bdevs_operational": 4, 00:11:42.410 "base_bdevs_list": [ 00:11:42.410 { 00:11:42.410 "name": "BaseBdev1", 00:11:42.410 "uuid": "b27cd105-97ab-40ca-874f-bceb9425da8b", 00:11:42.410 "is_configured": true, 00:11:42.410 "data_offset": 0, 00:11:42.410 "data_size": 65536 00:11:42.410 }, 00:11:42.410 { 00:11:42.410 "name": null, 00:11:42.410 "uuid": "475e2bc5-9752-4462-9579-bb40cc587457", 00:11:42.410 "is_configured": false, 00:11:42.410 "data_offset": 0, 00:11:42.410 "data_size": 65536 00:11:42.410 }, 00:11:42.410 { 
00:11:42.410 "name": "BaseBdev3", 00:11:42.410 "uuid": "b1f32700-2ec0-45b0-b723-05a0374ed23c", 00:11:42.410 "is_configured": true, 00:11:42.410 "data_offset": 0, 00:11:42.410 "data_size": 65536 00:11:42.410 }, 00:11:42.410 { 00:11:42.410 "name": "BaseBdev4", 00:11:42.410 "uuid": "756326c7-143d-46fc-9b3e-103cf65e858f", 00:11:42.410 "is_configured": true, 00:11:42.410 "data_offset": 0, 00:11:42.410 "data_size": 65536 00:11:42.410 } 00:11:42.410 ] 00:11:42.410 }' 00:11:42.410 17:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.410 17:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.669 17:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:42.669 17:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.669 17:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.669 17:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.669 17:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.927 17:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:42.927 17:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:42.927 17:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.927 17:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.927 [2024-12-07 17:28:16.054132] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:42.927 17:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.927 17:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:42.927 17:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:42.927 17:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:42.927 17:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:42.927 17:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:42.927 17:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:42.927 17:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.927 17:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.927 17:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.927 17:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.927 17:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.927 17:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:42.927 17:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.927 17:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.927 17:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.927 17:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.927 "name": "Existed_Raid", 00:11:42.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.927 "strip_size_kb": 0, 00:11:42.927 "state": "configuring", 00:11:42.927 "raid_level": "raid1", 00:11:42.927 "superblock": false, 00:11:42.927 
"num_base_bdevs": 4, 00:11:42.927 "num_base_bdevs_discovered": 2, 00:11:42.927 "num_base_bdevs_operational": 4, 00:11:42.927 "base_bdevs_list": [ 00:11:42.927 { 00:11:42.927 "name": null, 00:11:42.927 "uuid": "b27cd105-97ab-40ca-874f-bceb9425da8b", 00:11:42.927 "is_configured": false, 00:11:42.927 "data_offset": 0, 00:11:42.927 "data_size": 65536 00:11:42.927 }, 00:11:42.927 { 00:11:42.927 "name": null, 00:11:42.927 "uuid": "475e2bc5-9752-4462-9579-bb40cc587457", 00:11:42.927 "is_configured": false, 00:11:42.927 "data_offset": 0, 00:11:42.927 "data_size": 65536 00:11:42.927 }, 00:11:42.927 { 00:11:42.927 "name": "BaseBdev3", 00:11:42.927 "uuid": "b1f32700-2ec0-45b0-b723-05a0374ed23c", 00:11:42.927 "is_configured": true, 00:11:42.927 "data_offset": 0, 00:11:42.927 "data_size": 65536 00:11:42.927 }, 00:11:42.927 { 00:11:42.927 "name": "BaseBdev4", 00:11:42.927 "uuid": "756326c7-143d-46fc-9b3e-103cf65e858f", 00:11:42.927 "is_configured": true, 00:11:42.927 "data_offset": 0, 00:11:42.927 "data_size": 65536 00:11:42.927 } 00:11:42.927 ] 00:11:42.927 }' 00:11:42.927 17:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.927 17:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.186 17:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:43.186 17:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.186 17:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.186 17:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.447 17:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.447 17:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:43.447 17:28:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:43.447 17:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.447 17:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.447 [2024-12-07 17:28:16.576560] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:43.447 17:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.447 17:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:43.447 17:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:43.447 17:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:43.447 17:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:43.447 17:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:43.447 17:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:43.447 17:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.447 17:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.447 17:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.447 17:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.447 17:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.447 17:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.447 17:28:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.447 17:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:43.447 17:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.447 17:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.447 "name": "Existed_Raid", 00:11:43.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.447 "strip_size_kb": 0, 00:11:43.447 "state": "configuring", 00:11:43.447 "raid_level": "raid1", 00:11:43.447 "superblock": false, 00:11:43.447 "num_base_bdevs": 4, 00:11:43.447 "num_base_bdevs_discovered": 3, 00:11:43.447 "num_base_bdevs_operational": 4, 00:11:43.447 "base_bdevs_list": [ 00:11:43.447 { 00:11:43.447 "name": null, 00:11:43.447 "uuid": "b27cd105-97ab-40ca-874f-bceb9425da8b", 00:11:43.447 "is_configured": false, 00:11:43.447 "data_offset": 0, 00:11:43.447 "data_size": 65536 00:11:43.447 }, 00:11:43.447 { 00:11:43.447 "name": "BaseBdev2", 00:11:43.447 "uuid": "475e2bc5-9752-4462-9579-bb40cc587457", 00:11:43.447 "is_configured": true, 00:11:43.447 "data_offset": 0, 00:11:43.447 "data_size": 65536 00:11:43.447 }, 00:11:43.447 { 00:11:43.447 "name": "BaseBdev3", 00:11:43.447 "uuid": "b1f32700-2ec0-45b0-b723-05a0374ed23c", 00:11:43.447 "is_configured": true, 00:11:43.447 "data_offset": 0, 00:11:43.447 "data_size": 65536 00:11:43.447 }, 00:11:43.447 { 00:11:43.447 "name": "BaseBdev4", 00:11:43.447 "uuid": "756326c7-143d-46fc-9b3e-103cf65e858f", 00:11:43.447 "is_configured": true, 00:11:43.447 "data_offset": 0, 00:11:43.447 "data_size": 65536 00:11:43.447 } 00:11:43.447 ] 00:11:43.447 }' 00:11:43.447 17:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.447 17:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.707 17:28:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.707 17:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.707 17:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.707 17:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:43.707 17:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.707 17:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:43.707 17:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.707 17:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.707 17:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.707 17:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:43.707 17:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.967 17:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b27cd105-97ab-40ca-874f-bceb9425da8b 00:11:43.967 17:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.967 17:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.967 [2024-12-07 17:28:17.134284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:43.967 [2024-12-07 17:28:17.134338] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:43.967 [2024-12-07 17:28:17.134349] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:43.967 
[2024-12-07 17:28:17.134642] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:43.967 [2024-12-07 17:28:17.134825] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:43.967 [2024-12-07 17:28:17.134834] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:43.967 [2024-12-07 17:28:17.135149] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:43.967 NewBaseBdev 00:11:43.967 17:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.967 17:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:43.967 17:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:43.967 17:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:43.967 17:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:43.967 17:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:43.967 17:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:43.967 17:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:43.967 17:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.967 17:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.967 17:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.967 17:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:43.967 17:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:43.967 17:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.967 [ 00:11:43.967 { 00:11:43.967 "name": "NewBaseBdev", 00:11:43.967 "aliases": [ 00:11:43.967 "b27cd105-97ab-40ca-874f-bceb9425da8b" 00:11:43.967 ], 00:11:43.967 "product_name": "Malloc disk", 00:11:43.967 "block_size": 512, 00:11:43.967 "num_blocks": 65536, 00:11:43.967 "uuid": "b27cd105-97ab-40ca-874f-bceb9425da8b", 00:11:43.967 "assigned_rate_limits": { 00:11:43.967 "rw_ios_per_sec": 0, 00:11:43.967 "rw_mbytes_per_sec": 0, 00:11:43.967 "r_mbytes_per_sec": 0, 00:11:43.967 "w_mbytes_per_sec": 0 00:11:43.967 }, 00:11:43.967 "claimed": true, 00:11:43.967 "claim_type": "exclusive_write", 00:11:43.967 "zoned": false, 00:11:43.967 "supported_io_types": { 00:11:43.967 "read": true, 00:11:43.967 "write": true, 00:11:43.967 "unmap": true, 00:11:43.967 "flush": true, 00:11:43.967 "reset": true, 00:11:43.967 "nvme_admin": false, 00:11:43.967 "nvme_io": false, 00:11:43.967 "nvme_io_md": false, 00:11:43.967 "write_zeroes": true, 00:11:43.967 "zcopy": true, 00:11:43.967 "get_zone_info": false, 00:11:43.967 "zone_management": false, 00:11:43.967 "zone_append": false, 00:11:43.967 "compare": false, 00:11:43.967 "compare_and_write": false, 00:11:43.967 "abort": true, 00:11:43.967 "seek_hole": false, 00:11:43.967 "seek_data": false, 00:11:43.967 "copy": true, 00:11:43.967 "nvme_iov_md": false 00:11:43.967 }, 00:11:43.967 "memory_domains": [ 00:11:43.967 { 00:11:43.967 "dma_device_id": "system", 00:11:43.967 "dma_device_type": 1 00:11:43.967 }, 00:11:43.967 { 00:11:43.967 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.967 "dma_device_type": 2 00:11:43.967 } 00:11:43.967 ], 00:11:43.967 "driver_specific": {} 00:11:43.967 } 00:11:43.967 ] 00:11:43.967 17:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.967 17:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
00:11:43.967 17:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:43.967 17:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:43.967 17:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:43.967 17:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:43.967 17:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:43.967 17:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:43.967 17:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.967 17:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.967 17:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.967 17:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.967 17:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.967 17:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.967 17:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.967 17:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:43.967 17:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.967 17:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.967 "name": "Existed_Raid", 00:11:43.967 "uuid": "72bfc973-6935-4ec5-b590-14e2e5edd7d6", 00:11:43.967 "strip_size_kb": 0, 00:11:43.967 "state": "online", 00:11:43.967 
"raid_level": "raid1", 00:11:43.967 "superblock": false, 00:11:43.967 "num_base_bdevs": 4, 00:11:43.967 "num_base_bdevs_discovered": 4, 00:11:43.967 "num_base_bdevs_operational": 4, 00:11:43.968 "base_bdevs_list": [ 00:11:43.968 { 00:11:43.968 "name": "NewBaseBdev", 00:11:43.968 "uuid": "b27cd105-97ab-40ca-874f-bceb9425da8b", 00:11:43.968 "is_configured": true, 00:11:43.968 "data_offset": 0, 00:11:43.968 "data_size": 65536 00:11:43.968 }, 00:11:43.968 { 00:11:43.968 "name": "BaseBdev2", 00:11:43.968 "uuid": "475e2bc5-9752-4462-9579-bb40cc587457", 00:11:43.968 "is_configured": true, 00:11:43.968 "data_offset": 0, 00:11:43.968 "data_size": 65536 00:11:43.968 }, 00:11:43.968 { 00:11:43.968 "name": "BaseBdev3", 00:11:43.968 "uuid": "b1f32700-2ec0-45b0-b723-05a0374ed23c", 00:11:43.968 "is_configured": true, 00:11:43.968 "data_offset": 0, 00:11:43.968 "data_size": 65536 00:11:43.968 }, 00:11:43.968 { 00:11:43.968 "name": "BaseBdev4", 00:11:43.968 "uuid": "756326c7-143d-46fc-9b3e-103cf65e858f", 00:11:43.968 "is_configured": true, 00:11:43.968 "data_offset": 0, 00:11:43.968 "data_size": 65536 00:11:43.968 } 00:11:43.968 ] 00:11:43.968 }' 00:11:43.968 17:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.968 17:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.227 17:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:44.227 17:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:44.227 17:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:44.227 17:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:44.227 17:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:44.227 17:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:11:44.227 17:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:44.227 17:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.227 17:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.227 17:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:44.227 [2024-12-07 17:28:17.558044] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:44.227 17:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.227 17:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:44.227 "name": "Existed_Raid", 00:11:44.227 "aliases": [ 00:11:44.227 "72bfc973-6935-4ec5-b590-14e2e5edd7d6" 00:11:44.227 ], 00:11:44.227 "product_name": "Raid Volume", 00:11:44.227 "block_size": 512, 00:11:44.227 "num_blocks": 65536, 00:11:44.227 "uuid": "72bfc973-6935-4ec5-b590-14e2e5edd7d6", 00:11:44.227 "assigned_rate_limits": { 00:11:44.227 "rw_ios_per_sec": 0, 00:11:44.227 "rw_mbytes_per_sec": 0, 00:11:44.227 "r_mbytes_per_sec": 0, 00:11:44.227 "w_mbytes_per_sec": 0 00:11:44.227 }, 00:11:44.227 "claimed": false, 00:11:44.227 "zoned": false, 00:11:44.227 "supported_io_types": { 00:11:44.227 "read": true, 00:11:44.227 "write": true, 00:11:44.227 "unmap": false, 00:11:44.227 "flush": false, 00:11:44.227 "reset": true, 00:11:44.227 "nvme_admin": false, 00:11:44.227 "nvme_io": false, 00:11:44.227 "nvme_io_md": false, 00:11:44.227 "write_zeroes": true, 00:11:44.227 "zcopy": false, 00:11:44.227 "get_zone_info": false, 00:11:44.227 "zone_management": false, 00:11:44.227 "zone_append": false, 00:11:44.227 "compare": false, 00:11:44.227 "compare_and_write": false, 00:11:44.227 "abort": false, 00:11:44.227 "seek_hole": false, 00:11:44.227 "seek_data": false, 00:11:44.227 
"copy": false, 00:11:44.227 "nvme_iov_md": false 00:11:44.227 }, 00:11:44.227 "memory_domains": [ 00:11:44.227 { 00:11:44.227 "dma_device_id": "system", 00:11:44.227 "dma_device_type": 1 00:11:44.227 }, 00:11:44.227 { 00:11:44.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.227 "dma_device_type": 2 00:11:44.227 }, 00:11:44.227 { 00:11:44.227 "dma_device_id": "system", 00:11:44.227 "dma_device_type": 1 00:11:44.227 }, 00:11:44.227 { 00:11:44.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.227 "dma_device_type": 2 00:11:44.227 }, 00:11:44.227 { 00:11:44.227 "dma_device_id": "system", 00:11:44.227 "dma_device_type": 1 00:11:44.227 }, 00:11:44.227 { 00:11:44.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.227 "dma_device_type": 2 00:11:44.227 }, 00:11:44.227 { 00:11:44.227 "dma_device_id": "system", 00:11:44.227 "dma_device_type": 1 00:11:44.227 }, 00:11:44.227 { 00:11:44.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.227 "dma_device_type": 2 00:11:44.227 } 00:11:44.227 ], 00:11:44.227 "driver_specific": { 00:11:44.227 "raid": { 00:11:44.227 "uuid": "72bfc973-6935-4ec5-b590-14e2e5edd7d6", 00:11:44.227 "strip_size_kb": 0, 00:11:44.227 "state": "online", 00:11:44.227 "raid_level": "raid1", 00:11:44.227 "superblock": false, 00:11:44.227 "num_base_bdevs": 4, 00:11:44.227 "num_base_bdevs_discovered": 4, 00:11:44.228 "num_base_bdevs_operational": 4, 00:11:44.228 "base_bdevs_list": [ 00:11:44.228 { 00:11:44.228 "name": "NewBaseBdev", 00:11:44.228 "uuid": "b27cd105-97ab-40ca-874f-bceb9425da8b", 00:11:44.228 "is_configured": true, 00:11:44.228 "data_offset": 0, 00:11:44.228 "data_size": 65536 00:11:44.228 }, 00:11:44.228 { 00:11:44.228 "name": "BaseBdev2", 00:11:44.228 "uuid": "475e2bc5-9752-4462-9579-bb40cc587457", 00:11:44.228 "is_configured": true, 00:11:44.228 "data_offset": 0, 00:11:44.228 "data_size": 65536 00:11:44.228 }, 00:11:44.228 { 00:11:44.228 "name": "BaseBdev3", 00:11:44.228 "uuid": "b1f32700-2ec0-45b0-b723-05a0374ed23c", 00:11:44.228 
"is_configured": true, 00:11:44.228 "data_offset": 0, 00:11:44.228 "data_size": 65536 00:11:44.228 }, 00:11:44.228 { 00:11:44.228 "name": "BaseBdev4", 00:11:44.228 "uuid": "756326c7-143d-46fc-9b3e-103cf65e858f", 00:11:44.228 "is_configured": true, 00:11:44.228 "data_offset": 0, 00:11:44.228 "data_size": 65536 00:11:44.228 } 00:11:44.228 ] 00:11:44.228 } 00:11:44.228 } 00:11:44.228 }' 00:11:44.228 17:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:44.489 17:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:44.489 BaseBdev2 00:11:44.489 BaseBdev3 00:11:44.489 BaseBdev4' 00:11:44.489 17:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:44.489 17:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:44.489 17:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:44.489 17:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:44.489 17:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:44.489 17:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.489 17:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.489 17:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.489 17:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:44.489 17:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:44.489 17:28:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:44.489 17:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:44.489 17:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.489 17:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.489 17:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:44.489 17:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.489 17:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:44.489 17:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:44.489 17:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:44.489 17:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:44.489 17:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.489 17:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.489 17:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:44.489 17:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.489 17:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:44.489 17:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:44.489 17:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:44.489 17:28:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:44.489 17:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.489 17:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.489 17:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:44.489 17:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.489 17:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:44.489 17:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:44.489 17:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:44.489 17:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.489 17:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.489 [2024-12-07 17:28:17.853109] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:44.489 [2024-12-07 17:28:17.853142] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:44.489 [2024-12-07 17:28:17.853240] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:44.489 [2024-12-07 17:28:17.853577] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:44.489 [2024-12-07 17:28:17.853591] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:44.489 17:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.489 17:28:17 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 73203 00:11:44.489 17:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73203 ']' 00:11:44.489 17:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73203 00:11:44.489 17:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:44.489 17:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:44.489 17:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73203 00:11:44.750 17:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:44.750 17:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:44.750 17:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73203' 00:11:44.750 killing process with pid 73203 00:11:44.750 17:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73203 00:11:44.750 [2024-12-07 17:28:17.900542] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:44.750 17:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73203 00:11:45.009 [2024-12-07 17:28:18.341492] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:46.392 17:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:46.392 00:11:46.392 real 0m11.782s 00:11:46.392 user 0m18.382s 00:11:46.392 sys 0m2.189s 00:11:46.392 17:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:46.392 17:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.392 ************************************ 00:11:46.392 END TEST raid_state_function_test 00:11:46.392 ************************************ 
00:11:46.392 17:28:19 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:11:46.392 17:28:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:46.392 17:28:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:46.392 17:28:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:46.392 ************************************ 00:11:46.392 START TEST raid_state_function_test_sb 00:11:46.392 ************************************ 00:11:46.392 17:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:11:46.392 17:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:46.392 17:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:46.392 17:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:46.392 17:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:46.392 17:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:46.392 17:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:46.392 17:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:46.392 17:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:46.392 17:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:46.392 17:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:46.392 17:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:46.392 17:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:46.392 
17:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:46.392 17:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:46.392 17:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:46.392 17:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:46.392 17:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:46.392 17:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:46.392 17:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:46.392 17:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:46.392 17:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:46.392 17:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:46.392 17:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:46.392 17:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:46.392 17:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:46.392 17:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:46.392 17:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:46.392 17:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:46.392 17:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:46.392 17:28:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73873 00:11:46.392 17:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73873' 00:11:46.392 Process raid pid: 73873 00:11:46.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:46.392 17:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73873 00:11:46.392 17:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 73873 ']' 00:11:46.392 17:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:46.392 17:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:46.392 17:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:46.392 17:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:46.392 17:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.392 [2024-12-07 17:28:19.753580] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:11:46.392 [2024-12-07 17:28:19.753790] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:46.652 [2024-12-07 17:28:19.932198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.911 [2024-12-07 17:28:20.076894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.169 [2024-12-07 17:28:20.326331] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:47.169 [2024-12-07 17:28:20.326488] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:47.428 17:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:47.428 17:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:47.428 17:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:47.428 17:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.428 17:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.428 [2024-12-07 17:28:20.642947] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:47.428 [2024-12-07 17:28:20.643077] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:47.428 [2024-12-07 17:28:20.643111] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:47.428 [2024-12-07 17:28:20.643137] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:47.428 [2024-12-07 17:28:20.643163] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:11:47.428 [2024-12-07 17:28:20.643187] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:47.428 [2024-12-07 17:28:20.643206] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:47.428 [2024-12-07 17:28:20.643228] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:47.428 17:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.428 17:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:47.428 17:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:47.428 17:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:47.428 17:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:47.428 17:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:47.428 17:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:47.428 17:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.428 17:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.428 17:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.428 17:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.428 17:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.428 17:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:47.428 17:28:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.428 17:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.428 17:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.428 17:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.428 "name": "Existed_Raid", 00:11:47.428 "uuid": "b0d7a2e6-6ef4-4399-95f4-d403fe582064", 00:11:47.428 "strip_size_kb": 0, 00:11:47.428 "state": "configuring", 00:11:47.428 "raid_level": "raid1", 00:11:47.428 "superblock": true, 00:11:47.428 "num_base_bdevs": 4, 00:11:47.428 "num_base_bdevs_discovered": 0, 00:11:47.428 "num_base_bdevs_operational": 4, 00:11:47.428 "base_bdevs_list": [ 00:11:47.428 { 00:11:47.428 "name": "BaseBdev1", 00:11:47.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.428 "is_configured": false, 00:11:47.428 "data_offset": 0, 00:11:47.428 "data_size": 0 00:11:47.428 }, 00:11:47.428 { 00:11:47.428 "name": "BaseBdev2", 00:11:47.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.428 "is_configured": false, 00:11:47.428 "data_offset": 0, 00:11:47.428 "data_size": 0 00:11:47.428 }, 00:11:47.428 { 00:11:47.428 "name": "BaseBdev3", 00:11:47.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.429 "is_configured": false, 00:11:47.429 "data_offset": 0, 00:11:47.429 "data_size": 0 00:11:47.429 }, 00:11:47.429 { 00:11:47.429 "name": "BaseBdev4", 00:11:47.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.429 "is_configured": false, 00:11:47.429 "data_offset": 0, 00:11:47.429 "data_size": 0 00:11:47.429 } 00:11:47.429 ] 00:11:47.429 }' 00:11:47.429 17:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.429 17:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.998 17:28:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:47.998 17:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.998 17:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.998 [2024-12-07 17:28:21.086145] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:47.998 [2024-12-07 17:28:21.086260] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:47.998 17:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.998 17:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:47.998 17:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.998 17:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.998 [2024-12-07 17:28:21.098113] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:47.998 [2024-12-07 17:28:21.098201] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:47.998 [2024-12-07 17:28:21.098229] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:47.998 [2024-12-07 17:28:21.098254] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:47.998 [2024-12-07 17:28:21.098273] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:47.998 [2024-12-07 17:28:21.098296] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:47.998 [2024-12-07 17:28:21.098314] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:11:47.998 [2024-12-07 17:28:21.098351] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:47.998 17:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.998 17:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:47.998 17:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.998 17:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.998 [2024-12-07 17:28:21.156175] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:47.998 BaseBdev1 00:11:47.998 17:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.998 17:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:47.998 17:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:47.998 17:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:47.998 17:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:47.998 17:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:47.998 17:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:47.998 17:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:47.998 17:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.998 17:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.998 17:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:47.998 17:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:47.998 17:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.999 17:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.999 [ 00:11:47.999 { 00:11:47.999 "name": "BaseBdev1", 00:11:47.999 "aliases": [ 00:11:47.999 "0254b792-d983-4e5b-9068-77735241fb3a" 00:11:47.999 ], 00:11:47.999 "product_name": "Malloc disk", 00:11:47.999 "block_size": 512, 00:11:47.999 "num_blocks": 65536, 00:11:47.999 "uuid": "0254b792-d983-4e5b-9068-77735241fb3a", 00:11:47.999 "assigned_rate_limits": { 00:11:47.999 "rw_ios_per_sec": 0, 00:11:47.999 "rw_mbytes_per_sec": 0, 00:11:47.999 "r_mbytes_per_sec": 0, 00:11:47.999 "w_mbytes_per_sec": 0 00:11:47.999 }, 00:11:47.999 "claimed": true, 00:11:47.999 "claim_type": "exclusive_write", 00:11:47.999 "zoned": false, 00:11:47.999 "supported_io_types": { 00:11:47.999 "read": true, 00:11:47.999 "write": true, 00:11:47.999 "unmap": true, 00:11:47.999 "flush": true, 00:11:47.999 "reset": true, 00:11:47.999 "nvme_admin": false, 00:11:47.999 "nvme_io": false, 00:11:47.999 "nvme_io_md": false, 00:11:47.999 "write_zeroes": true, 00:11:47.999 "zcopy": true, 00:11:47.999 "get_zone_info": false, 00:11:47.999 "zone_management": false, 00:11:47.999 "zone_append": false, 00:11:47.999 "compare": false, 00:11:47.999 "compare_and_write": false, 00:11:47.999 "abort": true, 00:11:47.999 "seek_hole": false, 00:11:47.999 "seek_data": false, 00:11:47.999 "copy": true, 00:11:47.999 "nvme_iov_md": false 00:11:47.999 }, 00:11:47.999 "memory_domains": [ 00:11:47.999 { 00:11:47.999 "dma_device_id": "system", 00:11:47.999 "dma_device_type": 1 00:11:47.999 }, 00:11:47.999 { 00:11:47.999 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.999 "dma_device_type": 2 00:11:47.999 } 00:11:47.999 ], 00:11:47.999 "driver_specific": {} 
00:11:47.999 } 00:11:47.999 ] 00:11:47.999 17:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.999 17:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:47.999 17:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:47.999 17:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:47.999 17:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:47.999 17:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:47.999 17:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:47.999 17:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:47.999 17:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.999 17:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.999 17:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.999 17:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.999 17:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.999 17:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:47.999 17:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.999 17:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.999 17:28:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.999 17:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.999 "name": "Existed_Raid", 00:11:47.999 "uuid": "389807ec-fd65-46a4-ae5c-3e9c150679b7", 00:11:47.999 "strip_size_kb": 0, 00:11:47.999 "state": "configuring", 00:11:47.999 "raid_level": "raid1", 00:11:47.999 "superblock": true, 00:11:47.999 "num_base_bdevs": 4, 00:11:47.999 "num_base_bdevs_discovered": 1, 00:11:47.999 "num_base_bdevs_operational": 4, 00:11:47.999 "base_bdevs_list": [ 00:11:47.999 { 00:11:47.999 "name": "BaseBdev1", 00:11:47.999 "uuid": "0254b792-d983-4e5b-9068-77735241fb3a", 00:11:47.999 "is_configured": true, 00:11:47.999 "data_offset": 2048, 00:11:47.999 "data_size": 63488 00:11:47.999 }, 00:11:47.999 { 00:11:47.999 "name": "BaseBdev2", 00:11:47.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.999 "is_configured": false, 00:11:47.999 "data_offset": 0, 00:11:47.999 "data_size": 0 00:11:47.999 }, 00:11:47.999 { 00:11:47.999 "name": "BaseBdev3", 00:11:47.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.999 "is_configured": false, 00:11:47.999 "data_offset": 0, 00:11:47.999 "data_size": 0 00:11:47.999 }, 00:11:47.999 { 00:11:47.999 "name": "BaseBdev4", 00:11:47.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.999 "is_configured": false, 00:11:47.999 "data_offset": 0, 00:11:47.999 "data_size": 0 00:11:47.999 } 00:11:47.999 ] 00:11:47.999 }' 00:11:47.999 17:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.999 17:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.259 17:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:48.259 17:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.259 17:28:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:48.259 [2024-12-07 17:28:21.595511] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:48.259 [2024-12-07 17:28:21.595646] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:48.259 17:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.259 17:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:48.259 17:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.259 17:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.259 [2024-12-07 17:28:21.607493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:48.259 [2024-12-07 17:28:21.609716] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:48.259 [2024-12-07 17:28:21.609762] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:48.259 [2024-12-07 17:28:21.609773] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:48.259 [2024-12-07 17:28:21.609784] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:48.259 [2024-12-07 17:28:21.609791] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:48.259 [2024-12-07 17:28:21.609800] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:48.259 17:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.259 17:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:48.259 17:28:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:48.259 17:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:48.259 17:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:48.259 17:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:48.259 17:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.259 17:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.259 17:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:48.259 17:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.259 17:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.259 17:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.259 17:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.259 17:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:48.259 17:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.259 17:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.259 17:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.520 17:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.520 17:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.520 "name": 
"Existed_Raid", 00:11:48.520 "uuid": "b47fbaed-8cbf-4f40-a186-696ac6ef8624", 00:11:48.520 "strip_size_kb": 0, 00:11:48.520 "state": "configuring", 00:11:48.520 "raid_level": "raid1", 00:11:48.520 "superblock": true, 00:11:48.520 "num_base_bdevs": 4, 00:11:48.520 "num_base_bdevs_discovered": 1, 00:11:48.520 "num_base_bdevs_operational": 4, 00:11:48.520 "base_bdevs_list": [ 00:11:48.520 { 00:11:48.520 "name": "BaseBdev1", 00:11:48.520 "uuid": "0254b792-d983-4e5b-9068-77735241fb3a", 00:11:48.520 "is_configured": true, 00:11:48.520 "data_offset": 2048, 00:11:48.520 "data_size": 63488 00:11:48.520 }, 00:11:48.520 { 00:11:48.520 "name": "BaseBdev2", 00:11:48.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.520 "is_configured": false, 00:11:48.520 "data_offset": 0, 00:11:48.520 "data_size": 0 00:11:48.520 }, 00:11:48.520 { 00:11:48.520 "name": "BaseBdev3", 00:11:48.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.520 "is_configured": false, 00:11:48.520 "data_offset": 0, 00:11:48.520 "data_size": 0 00:11:48.520 }, 00:11:48.520 { 00:11:48.520 "name": "BaseBdev4", 00:11:48.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.520 "is_configured": false, 00:11:48.520 "data_offset": 0, 00:11:48.520 "data_size": 0 00:11:48.520 } 00:11:48.520 ] 00:11:48.520 }' 00:11:48.520 17:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.520 17:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.780 17:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:48.780 17:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.780 17:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.780 [2024-12-07 17:28:22.107604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:48.780 
BaseBdev2 00:11:48.780 17:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.780 17:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:48.780 17:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:48.780 17:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:48.780 17:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:48.780 17:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:48.780 17:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:48.780 17:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:48.780 17:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.780 17:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.780 17:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.780 17:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:48.780 17:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.780 17:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.780 [ 00:11:48.780 { 00:11:48.780 "name": "BaseBdev2", 00:11:48.780 "aliases": [ 00:11:48.780 "5eb83c96-9031-4d1f-b7d3-9a41d55cda29" 00:11:48.780 ], 00:11:48.780 "product_name": "Malloc disk", 00:11:48.780 "block_size": 512, 00:11:48.780 "num_blocks": 65536, 00:11:48.780 "uuid": "5eb83c96-9031-4d1f-b7d3-9a41d55cda29", 00:11:48.780 "assigned_rate_limits": { 
00:11:48.780 "rw_ios_per_sec": 0, 00:11:48.780 "rw_mbytes_per_sec": 0, 00:11:48.780 "r_mbytes_per_sec": 0, 00:11:48.780 "w_mbytes_per_sec": 0 00:11:48.780 }, 00:11:48.780 "claimed": true, 00:11:48.780 "claim_type": "exclusive_write", 00:11:48.780 "zoned": false, 00:11:48.780 "supported_io_types": { 00:11:48.780 "read": true, 00:11:48.780 "write": true, 00:11:48.780 "unmap": true, 00:11:48.780 "flush": true, 00:11:48.780 "reset": true, 00:11:48.780 "nvme_admin": false, 00:11:48.780 "nvme_io": false, 00:11:48.780 "nvme_io_md": false, 00:11:48.780 "write_zeroes": true, 00:11:48.780 "zcopy": true, 00:11:48.780 "get_zone_info": false, 00:11:48.780 "zone_management": false, 00:11:48.780 "zone_append": false, 00:11:48.780 "compare": false, 00:11:48.780 "compare_and_write": false, 00:11:48.781 "abort": true, 00:11:48.781 "seek_hole": false, 00:11:48.781 "seek_data": false, 00:11:48.781 "copy": true, 00:11:48.781 "nvme_iov_md": false 00:11:48.781 }, 00:11:48.781 "memory_domains": [ 00:11:48.781 { 00:11:48.781 "dma_device_id": "system", 00:11:48.781 "dma_device_type": 1 00:11:48.781 }, 00:11:48.781 { 00:11:48.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.781 "dma_device_type": 2 00:11:48.781 } 00:11:48.781 ], 00:11:48.781 "driver_specific": {} 00:11:48.781 } 00:11:48.781 ] 00:11:48.781 17:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.781 17:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:48.781 17:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:48.781 17:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:48.781 17:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:48.781 17:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:11:48.781 17:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:48.781 17:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.781 17:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.781 17:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:48.781 17:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.781 17:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.781 17:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.781 17:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.781 17:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.781 17:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:48.781 17:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.781 17:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.040 17:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.041 17:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.041 "name": "Existed_Raid", 00:11:49.041 "uuid": "b47fbaed-8cbf-4f40-a186-696ac6ef8624", 00:11:49.041 "strip_size_kb": 0, 00:11:49.041 "state": "configuring", 00:11:49.041 "raid_level": "raid1", 00:11:49.041 "superblock": true, 00:11:49.041 "num_base_bdevs": 4, 00:11:49.041 "num_base_bdevs_discovered": 2, 00:11:49.041 "num_base_bdevs_operational": 4, 00:11:49.041 
"base_bdevs_list": [ 00:11:49.041 { 00:11:49.041 "name": "BaseBdev1", 00:11:49.041 "uuid": "0254b792-d983-4e5b-9068-77735241fb3a", 00:11:49.041 "is_configured": true, 00:11:49.041 "data_offset": 2048, 00:11:49.041 "data_size": 63488 00:11:49.041 }, 00:11:49.041 { 00:11:49.041 "name": "BaseBdev2", 00:11:49.041 "uuid": "5eb83c96-9031-4d1f-b7d3-9a41d55cda29", 00:11:49.041 "is_configured": true, 00:11:49.041 "data_offset": 2048, 00:11:49.041 "data_size": 63488 00:11:49.041 }, 00:11:49.041 { 00:11:49.041 "name": "BaseBdev3", 00:11:49.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.041 "is_configured": false, 00:11:49.041 "data_offset": 0, 00:11:49.041 "data_size": 0 00:11:49.041 }, 00:11:49.041 { 00:11:49.041 "name": "BaseBdev4", 00:11:49.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.041 "is_configured": false, 00:11:49.041 "data_offset": 0, 00:11:49.041 "data_size": 0 00:11:49.041 } 00:11:49.041 ] 00:11:49.041 }' 00:11:49.041 17:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.041 17:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.300 17:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:49.300 17:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.300 17:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.300 [2024-12-07 17:28:22.648735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:49.300 BaseBdev3 00:11:49.300 17:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.300 17:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:49.300 17:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev3 00:11:49.300 17:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:49.300 17:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:49.300 17:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:49.300 17:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:49.300 17:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:49.300 17:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.300 17:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.300 17:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.300 17:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:49.300 17:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.300 17:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.300 [ 00:11:49.300 { 00:11:49.300 "name": "BaseBdev3", 00:11:49.300 "aliases": [ 00:11:49.300 "314795df-48f2-456d-9fdd-c5ba2e973d50" 00:11:49.300 ], 00:11:49.300 "product_name": "Malloc disk", 00:11:49.300 "block_size": 512, 00:11:49.300 "num_blocks": 65536, 00:11:49.300 "uuid": "314795df-48f2-456d-9fdd-c5ba2e973d50", 00:11:49.300 "assigned_rate_limits": { 00:11:49.300 "rw_ios_per_sec": 0, 00:11:49.300 "rw_mbytes_per_sec": 0, 00:11:49.300 "r_mbytes_per_sec": 0, 00:11:49.300 "w_mbytes_per_sec": 0 00:11:49.300 }, 00:11:49.300 "claimed": true, 00:11:49.300 "claim_type": "exclusive_write", 00:11:49.300 "zoned": false, 00:11:49.300 "supported_io_types": { 00:11:49.300 "read": true, 00:11:49.300 
"write": true, 00:11:49.300 "unmap": true, 00:11:49.300 "flush": true, 00:11:49.559 "reset": true, 00:11:49.559 "nvme_admin": false, 00:11:49.559 "nvme_io": false, 00:11:49.559 "nvme_io_md": false, 00:11:49.559 "write_zeroes": true, 00:11:49.559 "zcopy": true, 00:11:49.559 "get_zone_info": false, 00:11:49.559 "zone_management": false, 00:11:49.559 "zone_append": false, 00:11:49.559 "compare": false, 00:11:49.559 "compare_and_write": false, 00:11:49.559 "abort": true, 00:11:49.559 "seek_hole": false, 00:11:49.559 "seek_data": false, 00:11:49.559 "copy": true, 00:11:49.559 "nvme_iov_md": false 00:11:49.559 }, 00:11:49.560 "memory_domains": [ 00:11:49.560 { 00:11:49.560 "dma_device_id": "system", 00:11:49.560 "dma_device_type": 1 00:11:49.560 }, 00:11:49.560 { 00:11:49.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.560 "dma_device_type": 2 00:11:49.560 } 00:11:49.560 ], 00:11:49.560 "driver_specific": {} 00:11:49.560 } 00:11:49.560 ] 00:11:49.560 17:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.560 17:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:49.560 17:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:49.560 17:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:49.560 17:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:49.560 17:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:49.560 17:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:49.560 17:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:49.560 17:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:11:49.560 17:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:49.560 17:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.560 17:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.560 17:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.560 17:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.560 17:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:49.560 17:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.560 17:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.560 17:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.560 17:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.560 17:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.560 "name": "Existed_Raid", 00:11:49.560 "uuid": "b47fbaed-8cbf-4f40-a186-696ac6ef8624", 00:11:49.560 "strip_size_kb": 0, 00:11:49.560 "state": "configuring", 00:11:49.560 "raid_level": "raid1", 00:11:49.560 "superblock": true, 00:11:49.560 "num_base_bdevs": 4, 00:11:49.560 "num_base_bdevs_discovered": 3, 00:11:49.560 "num_base_bdevs_operational": 4, 00:11:49.560 "base_bdevs_list": [ 00:11:49.560 { 00:11:49.560 "name": "BaseBdev1", 00:11:49.560 "uuid": "0254b792-d983-4e5b-9068-77735241fb3a", 00:11:49.560 "is_configured": true, 00:11:49.560 "data_offset": 2048, 00:11:49.560 "data_size": 63488 00:11:49.560 }, 00:11:49.560 { 00:11:49.560 "name": "BaseBdev2", 00:11:49.560 "uuid": 
"5eb83c96-9031-4d1f-b7d3-9a41d55cda29", 00:11:49.560 "is_configured": true, 00:11:49.560 "data_offset": 2048, 00:11:49.560 "data_size": 63488 00:11:49.560 }, 00:11:49.560 { 00:11:49.560 "name": "BaseBdev3", 00:11:49.560 "uuid": "314795df-48f2-456d-9fdd-c5ba2e973d50", 00:11:49.560 "is_configured": true, 00:11:49.560 "data_offset": 2048, 00:11:49.560 "data_size": 63488 00:11:49.560 }, 00:11:49.560 { 00:11:49.560 "name": "BaseBdev4", 00:11:49.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.560 "is_configured": false, 00:11:49.560 "data_offset": 0, 00:11:49.560 "data_size": 0 00:11:49.560 } 00:11:49.560 ] 00:11:49.560 }' 00:11:49.560 17:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.560 17:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.819 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:49.819 17:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.819 17:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.078 [2024-12-07 17:28:23.200797] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:50.078 [2024-12-07 17:28:23.201237] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:50.078 [2024-12-07 17:28:23.201295] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:50.078 [2024-12-07 17:28:23.201642] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:50.078 [2024-12-07 17:28:23.201894] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:50.078 [2024-12-07 17:28:23.201963] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:50.078 
[2024-12-07 17:28:23.202218] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:50.078 BaseBdev4 00:11:50.078 17:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.078 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:50.078 17:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:50.078 17:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:50.078 17:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:50.078 17:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:50.078 17:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:50.078 17:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:50.078 17:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.078 17:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.078 17:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.078 17:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:50.078 17:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.078 17:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.078 [ 00:11:50.078 { 00:11:50.078 "name": "BaseBdev4", 00:11:50.078 "aliases": [ 00:11:50.078 "61a97035-c3f9-4f69-9a84-99c550531e25" 00:11:50.078 ], 00:11:50.078 "product_name": "Malloc disk", 00:11:50.078 "block_size": 512, 00:11:50.078 
"num_blocks": 65536, 00:11:50.078 "uuid": "61a97035-c3f9-4f69-9a84-99c550531e25", 00:11:50.078 "assigned_rate_limits": { 00:11:50.078 "rw_ios_per_sec": 0, 00:11:50.078 "rw_mbytes_per_sec": 0, 00:11:50.078 "r_mbytes_per_sec": 0, 00:11:50.078 "w_mbytes_per_sec": 0 00:11:50.078 }, 00:11:50.078 "claimed": true, 00:11:50.078 "claim_type": "exclusive_write", 00:11:50.078 "zoned": false, 00:11:50.078 "supported_io_types": { 00:11:50.078 "read": true, 00:11:50.078 "write": true, 00:11:50.078 "unmap": true, 00:11:50.078 "flush": true, 00:11:50.078 "reset": true, 00:11:50.078 "nvme_admin": false, 00:11:50.078 "nvme_io": false, 00:11:50.078 "nvme_io_md": false, 00:11:50.078 "write_zeroes": true, 00:11:50.078 "zcopy": true, 00:11:50.078 "get_zone_info": false, 00:11:50.079 "zone_management": false, 00:11:50.079 "zone_append": false, 00:11:50.079 "compare": false, 00:11:50.079 "compare_and_write": false, 00:11:50.079 "abort": true, 00:11:50.079 "seek_hole": false, 00:11:50.079 "seek_data": false, 00:11:50.079 "copy": true, 00:11:50.079 "nvme_iov_md": false 00:11:50.079 }, 00:11:50.079 "memory_domains": [ 00:11:50.079 { 00:11:50.079 "dma_device_id": "system", 00:11:50.079 "dma_device_type": 1 00:11:50.079 }, 00:11:50.079 { 00:11:50.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.079 "dma_device_type": 2 00:11:50.079 } 00:11:50.079 ], 00:11:50.079 "driver_specific": {} 00:11:50.079 } 00:11:50.079 ] 00:11:50.079 17:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.079 17:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:50.079 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:50.079 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:50.079 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:11:50.079 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:50.079 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:50.079 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:50.079 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:50.079 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:50.079 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.079 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.079 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.079 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.079 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.079 17:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.079 17:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.079 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:50.079 17:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.079 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.079 "name": "Existed_Raid", 00:11:50.079 "uuid": "b47fbaed-8cbf-4f40-a186-696ac6ef8624", 00:11:50.079 "strip_size_kb": 0, 00:11:50.079 "state": "online", 00:11:50.079 "raid_level": "raid1", 00:11:50.079 "superblock": true, 00:11:50.079 "num_base_bdevs": 4, 
00:11:50.079 "num_base_bdevs_discovered": 4, 00:11:50.079 "num_base_bdevs_operational": 4, 00:11:50.079 "base_bdevs_list": [ 00:11:50.079 { 00:11:50.079 "name": "BaseBdev1", 00:11:50.079 "uuid": "0254b792-d983-4e5b-9068-77735241fb3a", 00:11:50.079 "is_configured": true, 00:11:50.079 "data_offset": 2048, 00:11:50.079 "data_size": 63488 00:11:50.079 }, 00:11:50.079 { 00:11:50.079 "name": "BaseBdev2", 00:11:50.079 "uuid": "5eb83c96-9031-4d1f-b7d3-9a41d55cda29", 00:11:50.079 "is_configured": true, 00:11:50.079 "data_offset": 2048, 00:11:50.079 "data_size": 63488 00:11:50.079 }, 00:11:50.079 { 00:11:50.079 "name": "BaseBdev3", 00:11:50.079 "uuid": "314795df-48f2-456d-9fdd-c5ba2e973d50", 00:11:50.079 "is_configured": true, 00:11:50.079 "data_offset": 2048, 00:11:50.079 "data_size": 63488 00:11:50.079 }, 00:11:50.079 { 00:11:50.079 "name": "BaseBdev4", 00:11:50.079 "uuid": "61a97035-c3f9-4f69-9a84-99c550531e25", 00:11:50.079 "is_configured": true, 00:11:50.079 "data_offset": 2048, 00:11:50.079 "data_size": 63488 00:11:50.079 } 00:11:50.079 ] 00:11:50.079 }' 00:11:50.079 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.079 17:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.339 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:50.339 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:50.339 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:50.339 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:50.339 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:50.339 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:50.339 
17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:50.339 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:50.339 17:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.339 17:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.339 [2024-12-07 17:28:23.712326] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:50.599 17:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.599 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:50.599 "name": "Existed_Raid", 00:11:50.599 "aliases": [ 00:11:50.599 "b47fbaed-8cbf-4f40-a186-696ac6ef8624" 00:11:50.599 ], 00:11:50.599 "product_name": "Raid Volume", 00:11:50.599 "block_size": 512, 00:11:50.599 "num_blocks": 63488, 00:11:50.599 "uuid": "b47fbaed-8cbf-4f40-a186-696ac6ef8624", 00:11:50.599 "assigned_rate_limits": { 00:11:50.599 "rw_ios_per_sec": 0, 00:11:50.599 "rw_mbytes_per_sec": 0, 00:11:50.599 "r_mbytes_per_sec": 0, 00:11:50.599 "w_mbytes_per_sec": 0 00:11:50.599 }, 00:11:50.599 "claimed": false, 00:11:50.599 "zoned": false, 00:11:50.599 "supported_io_types": { 00:11:50.599 "read": true, 00:11:50.599 "write": true, 00:11:50.599 "unmap": false, 00:11:50.599 "flush": false, 00:11:50.599 "reset": true, 00:11:50.599 "nvme_admin": false, 00:11:50.599 "nvme_io": false, 00:11:50.599 "nvme_io_md": false, 00:11:50.599 "write_zeroes": true, 00:11:50.599 "zcopy": false, 00:11:50.599 "get_zone_info": false, 00:11:50.599 "zone_management": false, 00:11:50.599 "zone_append": false, 00:11:50.599 "compare": false, 00:11:50.599 "compare_and_write": false, 00:11:50.599 "abort": false, 00:11:50.599 "seek_hole": false, 00:11:50.599 "seek_data": false, 00:11:50.599 "copy": false, 00:11:50.599 
"nvme_iov_md": false 00:11:50.599 }, 00:11:50.599 "memory_domains": [ 00:11:50.599 { 00:11:50.599 "dma_device_id": "system", 00:11:50.599 "dma_device_type": 1 00:11:50.599 }, 00:11:50.599 { 00:11:50.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.599 "dma_device_type": 2 00:11:50.599 }, 00:11:50.599 { 00:11:50.599 "dma_device_id": "system", 00:11:50.599 "dma_device_type": 1 00:11:50.599 }, 00:11:50.599 { 00:11:50.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.599 "dma_device_type": 2 00:11:50.599 }, 00:11:50.599 { 00:11:50.599 "dma_device_id": "system", 00:11:50.599 "dma_device_type": 1 00:11:50.599 }, 00:11:50.599 { 00:11:50.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.599 "dma_device_type": 2 00:11:50.599 }, 00:11:50.599 { 00:11:50.599 "dma_device_id": "system", 00:11:50.599 "dma_device_type": 1 00:11:50.599 }, 00:11:50.599 { 00:11:50.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.599 "dma_device_type": 2 00:11:50.599 } 00:11:50.599 ], 00:11:50.599 "driver_specific": { 00:11:50.599 "raid": { 00:11:50.599 "uuid": "b47fbaed-8cbf-4f40-a186-696ac6ef8624", 00:11:50.599 "strip_size_kb": 0, 00:11:50.599 "state": "online", 00:11:50.599 "raid_level": "raid1", 00:11:50.599 "superblock": true, 00:11:50.599 "num_base_bdevs": 4, 00:11:50.599 "num_base_bdevs_discovered": 4, 00:11:50.599 "num_base_bdevs_operational": 4, 00:11:50.599 "base_bdevs_list": [ 00:11:50.599 { 00:11:50.599 "name": "BaseBdev1", 00:11:50.599 "uuid": "0254b792-d983-4e5b-9068-77735241fb3a", 00:11:50.599 "is_configured": true, 00:11:50.599 "data_offset": 2048, 00:11:50.599 "data_size": 63488 00:11:50.599 }, 00:11:50.599 { 00:11:50.599 "name": "BaseBdev2", 00:11:50.599 "uuid": "5eb83c96-9031-4d1f-b7d3-9a41d55cda29", 00:11:50.599 "is_configured": true, 00:11:50.599 "data_offset": 2048, 00:11:50.599 "data_size": 63488 00:11:50.599 }, 00:11:50.599 { 00:11:50.599 "name": "BaseBdev3", 00:11:50.599 "uuid": "314795df-48f2-456d-9fdd-c5ba2e973d50", 00:11:50.599 "is_configured": true, 
00:11:50.599 "data_offset": 2048, 00:11:50.599 "data_size": 63488 00:11:50.599 }, 00:11:50.599 { 00:11:50.599 "name": "BaseBdev4", 00:11:50.599 "uuid": "61a97035-c3f9-4f69-9a84-99c550531e25", 00:11:50.599 "is_configured": true, 00:11:50.599 "data_offset": 2048, 00:11:50.599 "data_size": 63488 00:11:50.599 } 00:11:50.599 ] 00:11:50.599 } 00:11:50.599 } 00:11:50.599 }' 00:11:50.599 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:50.599 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:50.599 BaseBdev2 00:11:50.599 BaseBdev3 00:11:50.599 BaseBdev4' 00:11:50.599 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.599 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:50.599 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:50.599 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.599 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:50.599 17:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.599 17:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.599 17:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.599 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:50.599 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:50.599 17:28:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:50.599 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:50.599 17:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.599 17:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.599 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.599 17:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.599 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:50.599 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:50.599 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:50.599 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:50.599 17:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.599 17:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.599 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.599 17:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.859 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:50.859 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:50.859 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:50.859 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:50.859 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.859 17:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.859 17:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.859 17:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.859 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:50.859 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:50.859 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:50.859 17:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.859 17:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.859 [2024-12-07 17:28:24.047444] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:50.859 17:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.859 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:50.859 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:50.859 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:50.859 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:11:50.859 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:50.859 17:28:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:50.859 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:50.859 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:50.859 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:50.859 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:50.859 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:50.859 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.859 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.859 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.859 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.859 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:50.859 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.859 17:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.859 17:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.859 17:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.859 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.859 "name": "Existed_Raid", 00:11:50.859 "uuid": "b47fbaed-8cbf-4f40-a186-696ac6ef8624", 00:11:50.859 "strip_size_kb": 0, 00:11:50.859 
"state": "online", 00:11:50.859 "raid_level": "raid1", 00:11:50.859 "superblock": true, 00:11:50.859 "num_base_bdevs": 4, 00:11:50.859 "num_base_bdevs_discovered": 3, 00:11:50.859 "num_base_bdevs_operational": 3, 00:11:50.859 "base_bdevs_list": [ 00:11:50.859 { 00:11:50.859 "name": null, 00:11:50.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.859 "is_configured": false, 00:11:50.859 "data_offset": 0, 00:11:50.859 "data_size": 63488 00:11:50.859 }, 00:11:50.859 { 00:11:50.859 "name": "BaseBdev2", 00:11:50.859 "uuid": "5eb83c96-9031-4d1f-b7d3-9a41d55cda29", 00:11:50.859 "is_configured": true, 00:11:50.859 "data_offset": 2048, 00:11:50.859 "data_size": 63488 00:11:50.859 }, 00:11:50.859 { 00:11:50.859 "name": "BaseBdev3", 00:11:50.859 "uuid": "314795df-48f2-456d-9fdd-c5ba2e973d50", 00:11:50.859 "is_configured": true, 00:11:50.859 "data_offset": 2048, 00:11:50.859 "data_size": 63488 00:11:50.859 }, 00:11:50.859 { 00:11:50.859 "name": "BaseBdev4", 00:11:50.859 "uuid": "61a97035-c3f9-4f69-9a84-99c550531e25", 00:11:50.859 "is_configured": true, 00:11:50.859 "data_offset": 2048, 00:11:50.859 "data_size": 63488 00:11:50.859 } 00:11:50.859 ] 00:11:50.859 }' 00:11:50.859 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.859 17:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.426 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:51.426 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:51.426 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.426 17:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.426 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:51.426 17:28:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.426 17:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.426 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:51.426 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:51.426 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:51.426 17:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.426 17:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.426 [2024-12-07 17:28:24.669997] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:51.426 17:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.426 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:51.426 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:51.426 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.426 17:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.426 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:51.426 17:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.426 17:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.685 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:51.685 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:11:51.685 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:51.685 17:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.685 17:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.685 [2024-12-07 17:28:24.833987] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:51.685 17:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.685 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:51.685 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:51.685 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.685 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:51.685 17:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.685 17:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.685 17:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.685 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:51.685 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:51.685 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:51.685 17:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.685 17:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.685 [2024-12-07 17:28:25.004824] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:51.685 [2024-12-07 17:28:25.005026] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:51.942 [2024-12-07 17:28:25.113601] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:51.942 [2024-12-07 17:28:25.113772] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:51.942 [2024-12-07 17:28:25.113817] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:51.942 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.942 17:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:51.942 17:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:51.942 17:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.942 17:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:51.942 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.942 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.942 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.942 17:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:51.942 17:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:51.942 17:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:51.942 17:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:51.942 17:28:25 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:51.942 17:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:51.942 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.942 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.942 BaseBdev2 00:11:51.942 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.942 17:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:51.942 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:51.942 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:51.942 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:51.942 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:51.942 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:51.942 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:51.942 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.942 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.942 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.942 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:51.942 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.942 17:28:25 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:11:51.942 [ 00:11:51.942 { 00:11:51.942 "name": "BaseBdev2", 00:11:51.942 "aliases": [ 00:11:51.942 "37af8724-a526-4af3-97a7-29dc07593087" 00:11:51.942 ], 00:11:51.942 "product_name": "Malloc disk", 00:11:51.942 "block_size": 512, 00:11:51.942 "num_blocks": 65536, 00:11:51.942 "uuid": "37af8724-a526-4af3-97a7-29dc07593087", 00:11:51.942 "assigned_rate_limits": { 00:11:51.942 "rw_ios_per_sec": 0, 00:11:51.942 "rw_mbytes_per_sec": 0, 00:11:51.942 "r_mbytes_per_sec": 0, 00:11:51.942 "w_mbytes_per_sec": 0 00:11:51.942 }, 00:11:51.942 "claimed": false, 00:11:51.942 "zoned": false, 00:11:51.942 "supported_io_types": { 00:11:51.942 "read": true, 00:11:51.942 "write": true, 00:11:51.942 "unmap": true, 00:11:51.942 "flush": true, 00:11:51.942 "reset": true, 00:11:51.942 "nvme_admin": false, 00:11:51.942 "nvme_io": false, 00:11:51.942 "nvme_io_md": false, 00:11:51.942 "write_zeroes": true, 00:11:51.942 "zcopy": true, 00:11:51.942 "get_zone_info": false, 00:11:51.942 "zone_management": false, 00:11:51.942 "zone_append": false, 00:11:51.942 "compare": false, 00:11:51.942 "compare_and_write": false, 00:11:51.942 "abort": true, 00:11:51.942 "seek_hole": false, 00:11:51.942 "seek_data": false, 00:11:51.942 "copy": true, 00:11:51.942 "nvme_iov_md": false 00:11:51.942 }, 00:11:51.942 "memory_domains": [ 00:11:51.942 { 00:11:51.942 "dma_device_id": "system", 00:11:51.942 "dma_device_type": 1 00:11:51.942 }, 00:11:51.942 { 00:11:51.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.942 "dma_device_type": 2 00:11:51.942 } 00:11:51.942 ], 00:11:51.942 "driver_specific": {} 00:11:51.942 } 00:11:51.942 ] 00:11:51.942 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.942 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:51.942 17:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:51.942 17:28:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:51.942 17:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:51.942 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.942 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.942 BaseBdev3 00:11:51.942 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.942 17:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:51.942 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:51.942 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:51.942 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:51.942 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:51.942 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:51.942 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:51.942 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.942 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.942 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.942 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:51.942 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.942 17:28:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.200 [ 00:11:52.200 { 00:11:52.200 "name": "BaseBdev3", 00:11:52.200 "aliases": [ 00:11:52.200 "7e218220-9977-47f3-9a69-de05520f5f30" 00:11:52.200 ], 00:11:52.200 "product_name": "Malloc disk", 00:11:52.200 "block_size": 512, 00:11:52.200 "num_blocks": 65536, 00:11:52.200 "uuid": "7e218220-9977-47f3-9a69-de05520f5f30", 00:11:52.200 "assigned_rate_limits": { 00:11:52.200 "rw_ios_per_sec": 0, 00:11:52.200 "rw_mbytes_per_sec": 0, 00:11:52.200 "r_mbytes_per_sec": 0, 00:11:52.200 "w_mbytes_per_sec": 0 00:11:52.200 }, 00:11:52.200 "claimed": false, 00:11:52.200 "zoned": false, 00:11:52.200 "supported_io_types": { 00:11:52.200 "read": true, 00:11:52.200 "write": true, 00:11:52.200 "unmap": true, 00:11:52.200 "flush": true, 00:11:52.200 "reset": true, 00:11:52.200 "nvme_admin": false, 00:11:52.200 "nvme_io": false, 00:11:52.200 "nvme_io_md": false, 00:11:52.200 "write_zeroes": true, 00:11:52.200 "zcopy": true, 00:11:52.200 "get_zone_info": false, 00:11:52.200 "zone_management": false, 00:11:52.200 "zone_append": false, 00:11:52.200 "compare": false, 00:11:52.200 "compare_and_write": false, 00:11:52.200 "abort": true, 00:11:52.200 "seek_hole": false, 00:11:52.200 "seek_data": false, 00:11:52.200 "copy": true, 00:11:52.200 "nvme_iov_md": false 00:11:52.200 }, 00:11:52.200 "memory_domains": [ 00:11:52.200 { 00:11:52.200 "dma_device_id": "system", 00:11:52.200 "dma_device_type": 1 00:11:52.200 }, 00:11:52.200 { 00:11:52.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.200 "dma_device_type": 2 00:11:52.200 } 00:11:52.200 ], 00:11:52.200 "driver_specific": {} 00:11:52.200 } 00:11:52.200 ] 00:11:52.200 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.200 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:52.200 17:28:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:52.200 17:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:52.200 17:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:52.200 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.200 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.200 BaseBdev4 00:11:52.201 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.201 17:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:52.201 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:52.201 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:52.201 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:52.201 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:52.201 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:52.201 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:52.201 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.201 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.201 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.201 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:52.201 17:28:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.201 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.201 [ 00:11:52.201 { 00:11:52.201 "name": "BaseBdev4", 00:11:52.201 "aliases": [ 00:11:52.201 "b00b66b6-0a09-465f-ae44-5294fe7b1a99" 00:11:52.201 ], 00:11:52.201 "product_name": "Malloc disk", 00:11:52.201 "block_size": 512, 00:11:52.201 "num_blocks": 65536, 00:11:52.201 "uuid": "b00b66b6-0a09-465f-ae44-5294fe7b1a99", 00:11:52.201 "assigned_rate_limits": { 00:11:52.201 "rw_ios_per_sec": 0, 00:11:52.201 "rw_mbytes_per_sec": 0, 00:11:52.201 "r_mbytes_per_sec": 0, 00:11:52.201 "w_mbytes_per_sec": 0 00:11:52.201 }, 00:11:52.201 "claimed": false, 00:11:52.201 "zoned": false, 00:11:52.201 "supported_io_types": { 00:11:52.201 "read": true, 00:11:52.201 "write": true, 00:11:52.201 "unmap": true, 00:11:52.201 "flush": true, 00:11:52.201 "reset": true, 00:11:52.201 "nvme_admin": false, 00:11:52.201 "nvme_io": false, 00:11:52.201 "nvme_io_md": false, 00:11:52.201 "write_zeroes": true, 00:11:52.201 "zcopy": true, 00:11:52.201 "get_zone_info": false, 00:11:52.201 "zone_management": false, 00:11:52.201 "zone_append": false, 00:11:52.201 "compare": false, 00:11:52.201 "compare_and_write": false, 00:11:52.201 "abort": true, 00:11:52.201 "seek_hole": false, 00:11:52.201 "seek_data": false, 00:11:52.201 "copy": true, 00:11:52.201 "nvme_iov_md": false 00:11:52.201 }, 00:11:52.201 "memory_domains": [ 00:11:52.201 { 00:11:52.201 "dma_device_id": "system", 00:11:52.201 "dma_device_type": 1 00:11:52.201 }, 00:11:52.201 { 00:11:52.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.201 "dma_device_type": 2 00:11:52.201 } 00:11:52.201 ], 00:11:52.201 "driver_specific": {} 00:11:52.201 } 00:11:52.201 ] 00:11:52.201 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.201 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:11:52.201 17:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:52.201 17:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:52.201 17:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:52.201 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.201 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.201 [2024-12-07 17:28:25.438275] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:52.201 [2024-12-07 17:28:25.438369] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:52.201 [2024-12-07 17:28:25.438412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:52.201 [2024-12-07 17:28:25.440663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:52.201 [2024-12-07 17:28:25.440756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:52.201 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.201 17:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:52.201 17:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:52.201 17:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:52.201 17:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:52.201 17:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:52.201 17:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:52.201 17:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.201 17:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.201 17:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.201 17:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.201 17:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.201 17:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.201 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.201 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.201 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.201 17:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.201 "name": "Existed_Raid", 00:11:52.201 "uuid": "7de183c2-4a50-49c9-9372-1c725b2116dc", 00:11:52.201 "strip_size_kb": 0, 00:11:52.201 "state": "configuring", 00:11:52.201 "raid_level": "raid1", 00:11:52.201 "superblock": true, 00:11:52.201 "num_base_bdevs": 4, 00:11:52.201 "num_base_bdevs_discovered": 3, 00:11:52.201 "num_base_bdevs_operational": 4, 00:11:52.201 "base_bdevs_list": [ 00:11:52.201 { 00:11:52.201 "name": "BaseBdev1", 00:11:52.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.201 "is_configured": false, 00:11:52.201 "data_offset": 0, 00:11:52.201 "data_size": 0 00:11:52.201 }, 00:11:52.201 { 00:11:52.201 "name": "BaseBdev2", 00:11:52.201 "uuid": "37af8724-a526-4af3-97a7-29dc07593087", 
00:11:52.201 "is_configured": true, 00:11:52.201 "data_offset": 2048, 00:11:52.201 "data_size": 63488 00:11:52.201 }, 00:11:52.201 { 00:11:52.201 "name": "BaseBdev3", 00:11:52.201 "uuid": "7e218220-9977-47f3-9a69-de05520f5f30", 00:11:52.201 "is_configured": true, 00:11:52.201 "data_offset": 2048, 00:11:52.201 "data_size": 63488 00:11:52.201 }, 00:11:52.201 { 00:11:52.201 "name": "BaseBdev4", 00:11:52.201 "uuid": "b00b66b6-0a09-465f-ae44-5294fe7b1a99", 00:11:52.201 "is_configured": true, 00:11:52.201 "data_offset": 2048, 00:11:52.201 "data_size": 63488 00:11:52.201 } 00:11:52.201 ] 00:11:52.201 }' 00:11:52.201 17:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.201 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.459 17:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:52.459 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.460 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.460 [2024-12-07 17:28:25.837643] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:52.718 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.718 17:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:52.718 17:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:52.718 17:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:52.718 17:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:52.718 17:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:52.718 17:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:52.718 17:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.718 17:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.718 17:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.718 17:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.718 17:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.718 17:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.718 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.718 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.718 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.718 17:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.718 "name": "Existed_Raid", 00:11:52.718 "uuid": "7de183c2-4a50-49c9-9372-1c725b2116dc", 00:11:52.718 "strip_size_kb": 0, 00:11:52.718 "state": "configuring", 00:11:52.718 "raid_level": "raid1", 00:11:52.718 "superblock": true, 00:11:52.718 "num_base_bdevs": 4, 00:11:52.718 "num_base_bdevs_discovered": 2, 00:11:52.718 "num_base_bdevs_operational": 4, 00:11:52.718 "base_bdevs_list": [ 00:11:52.718 { 00:11:52.718 "name": "BaseBdev1", 00:11:52.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.718 "is_configured": false, 00:11:52.718 "data_offset": 0, 00:11:52.718 "data_size": 0 00:11:52.718 }, 00:11:52.718 { 00:11:52.718 "name": null, 00:11:52.718 "uuid": "37af8724-a526-4af3-97a7-29dc07593087", 00:11:52.718 
"is_configured": false, 00:11:52.718 "data_offset": 0, 00:11:52.718 "data_size": 63488 00:11:52.718 }, 00:11:52.718 { 00:11:52.718 "name": "BaseBdev3", 00:11:52.718 "uuid": "7e218220-9977-47f3-9a69-de05520f5f30", 00:11:52.718 "is_configured": true, 00:11:52.718 "data_offset": 2048, 00:11:52.718 "data_size": 63488 00:11:52.718 }, 00:11:52.718 { 00:11:52.718 "name": "BaseBdev4", 00:11:52.718 "uuid": "b00b66b6-0a09-465f-ae44-5294fe7b1a99", 00:11:52.718 "is_configured": true, 00:11:52.718 "data_offset": 2048, 00:11:52.718 "data_size": 63488 00:11:52.718 } 00:11:52.718 ] 00:11:52.718 }' 00:11:52.718 17:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.718 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.977 17:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:52.977 17:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.977 17:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.977 17:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.977 17:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.977 17:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:52.977 17:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:52.977 17:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.977 17:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.236 [2024-12-07 17:28:26.357615] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:53.236 BaseBdev1 
00:11:53.236 17:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.236 17:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:53.236 17:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:53.236 17:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:53.236 17:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:53.236 17:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:53.236 17:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:53.236 17:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:53.236 17:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.236 17:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.236 17:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.236 17:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:53.236 17:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.236 17:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.236 [ 00:11:53.236 { 00:11:53.236 "name": "BaseBdev1", 00:11:53.236 "aliases": [ 00:11:53.236 "0a89f57e-66dd-4440-941e-4c94830a468c" 00:11:53.236 ], 00:11:53.236 "product_name": "Malloc disk", 00:11:53.236 "block_size": 512, 00:11:53.236 "num_blocks": 65536, 00:11:53.236 "uuid": "0a89f57e-66dd-4440-941e-4c94830a468c", 00:11:53.236 "assigned_rate_limits": { 00:11:53.236 
"rw_ios_per_sec": 0, 00:11:53.236 "rw_mbytes_per_sec": 0, 00:11:53.236 "r_mbytes_per_sec": 0, 00:11:53.236 "w_mbytes_per_sec": 0 00:11:53.236 }, 00:11:53.236 "claimed": true, 00:11:53.236 "claim_type": "exclusive_write", 00:11:53.236 "zoned": false, 00:11:53.236 "supported_io_types": { 00:11:53.236 "read": true, 00:11:53.236 "write": true, 00:11:53.236 "unmap": true, 00:11:53.236 "flush": true, 00:11:53.236 "reset": true, 00:11:53.236 "nvme_admin": false, 00:11:53.236 "nvme_io": false, 00:11:53.236 "nvme_io_md": false, 00:11:53.236 "write_zeroes": true, 00:11:53.236 "zcopy": true, 00:11:53.236 "get_zone_info": false, 00:11:53.236 "zone_management": false, 00:11:53.236 "zone_append": false, 00:11:53.236 "compare": false, 00:11:53.236 "compare_and_write": false, 00:11:53.236 "abort": true, 00:11:53.236 "seek_hole": false, 00:11:53.236 "seek_data": false, 00:11:53.236 "copy": true, 00:11:53.236 "nvme_iov_md": false 00:11:53.236 }, 00:11:53.236 "memory_domains": [ 00:11:53.236 { 00:11:53.236 "dma_device_id": "system", 00:11:53.236 "dma_device_type": 1 00:11:53.236 }, 00:11:53.236 { 00:11:53.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.236 "dma_device_type": 2 00:11:53.236 } 00:11:53.236 ], 00:11:53.236 "driver_specific": {} 00:11:53.236 } 00:11:53.236 ] 00:11:53.236 17:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.236 17:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:53.236 17:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:53.236 17:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:53.236 17:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:53.236 17:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:11:53.236 17:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:53.236 17:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:53.236 17:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.236 17:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.236 17:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.236 17:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.236 17:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.236 17:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:53.236 17:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.236 17:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.236 17:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.236 17:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.236 "name": "Existed_Raid", 00:11:53.236 "uuid": "7de183c2-4a50-49c9-9372-1c725b2116dc", 00:11:53.236 "strip_size_kb": 0, 00:11:53.236 "state": "configuring", 00:11:53.236 "raid_level": "raid1", 00:11:53.236 "superblock": true, 00:11:53.236 "num_base_bdevs": 4, 00:11:53.236 "num_base_bdevs_discovered": 3, 00:11:53.236 "num_base_bdevs_operational": 4, 00:11:53.236 "base_bdevs_list": [ 00:11:53.236 { 00:11:53.236 "name": "BaseBdev1", 00:11:53.236 "uuid": "0a89f57e-66dd-4440-941e-4c94830a468c", 00:11:53.236 "is_configured": true, 00:11:53.236 "data_offset": 2048, 00:11:53.236 "data_size": 63488 
00:11:53.236 }, 00:11:53.236 { 00:11:53.236 "name": null, 00:11:53.236 "uuid": "37af8724-a526-4af3-97a7-29dc07593087", 00:11:53.236 "is_configured": false, 00:11:53.236 "data_offset": 0, 00:11:53.236 "data_size": 63488 00:11:53.236 }, 00:11:53.236 { 00:11:53.236 "name": "BaseBdev3", 00:11:53.236 "uuid": "7e218220-9977-47f3-9a69-de05520f5f30", 00:11:53.236 "is_configured": true, 00:11:53.236 "data_offset": 2048, 00:11:53.236 "data_size": 63488 00:11:53.236 }, 00:11:53.236 { 00:11:53.236 "name": "BaseBdev4", 00:11:53.236 "uuid": "b00b66b6-0a09-465f-ae44-5294fe7b1a99", 00:11:53.236 "is_configured": true, 00:11:53.236 "data_offset": 2048, 00:11:53.236 "data_size": 63488 00:11:53.236 } 00:11:53.236 ] 00:11:53.236 }' 00:11:53.236 17:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.236 17:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.805 17:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:53.805 17:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.805 17:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.805 17:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.805 17:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.805 17:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:53.805 17:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:53.805 17:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.805 17:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.805 
[2024-12-07 17:28:26.928773] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:53.805 17:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.805 17:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:53.805 17:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:53.805 17:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:53.805 17:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:53.805 17:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:53.805 17:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:53.805 17:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.805 17:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.805 17:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.805 17:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.805 17:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.805 17:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:53.805 17:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.805 17:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.805 17:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.805 17:28:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.805 "name": "Existed_Raid", 00:11:53.805 "uuid": "7de183c2-4a50-49c9-9372-1c725b2116dc", 00:11:53.805 "strip_size_kb": 0, 00:11:53.805 "state": "configuring", 00:11:53.805 "raid_level": "raid1", 00:11:53.805 "superblock": true, 00:11:53.805 "num_base_bdevs": 4, 00:11:53.805 "num_base_bdevs_discovered": 2, 00:11:53.805 "num_base_bdevs_operational": 4, 00:11:53.805 "base_bdevs_list": [ 00:11:53.805 { 00:11:53.805 "name": "BaseBdev1", 00:11:53.805 "uuid": "0a89f57e-66dd-4440-941e-4c94830a468c", 00:11:53.805 "is_configured": true, 00:11:53.805 "data_offset": 2048, 00:11:53.805 "data_size": 63488 00:11:53.805 }, 00:11:53.805 { 00:11:53.805 "name": null, 00:11:53.805 "uuid": "37af8724-a526-4af3-97a7-29dc07593087", 00:11:53.805 "is_configured": false, 00:11:53.805 "data_offset": 0, 00:11:53.805 "data_size": 63488 00:11:53.805 }, 00:11:53.805 { 00:11:53.805 "name": null, 00:11:53.805 "uuid": "7e218220-9977-47f3-9a69-de05520f5f30", 00:11:53.805 "is_configured": false, 00:11:53.805 "data_offset": 0, 00:11:53.805 "data_size": 63488 00:11:53.805 }, 00:11:53.805 { 00:11:53.805 "name": "BaseBdev4", 00:11:53.805 "uuid": "b00b66b6-0a09-465f-ae44-5294fe7b1a99", 00:11:53.805 "is_configured": true, 00:11:53.805 "data_offset": 2048, 00:11:53.805 "data_size": 63488 00:11:53.805 } 00:11:53.805 ] 00:11:53.805 }' 00:11:53.805 17:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.805 17:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.064 17:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.064 17:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.064 17:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.064 17:28:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:54.064 17:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.328 17:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:54.328 17:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:54.328 17:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.328 17:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.328 [2024-12-07 17:28:27.483827] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:54.328 17:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.328 17:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:54.328 17:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:54.328 17:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:54.328 17:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:54.328 17:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:54.328 17:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:54.328 17:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.328 17:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.328 17:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:54.328 17:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.328 17:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.328 17:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:54.328 17:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.328 17:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.328 17:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.328 17:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.328 "name": "Existed_Raid", 00:11:54.328 "uuid": "7de183c2-4a50-49c9-9372-1c725b2116dc", 00:11:54.328 "strip_size_kb": 0, 00:11:54.328 "state": "configuring", 00:11:54.328 "raid_level": "raid1", 00:11:54.328 "superblock": true, 00:11:54.328 "num_base_bdevs": 4, 00:11:54.328 "num_base_bdevs_discovered": 3, 00:11:54.328 "num_base_bdevs_operational": 4, 00:11:54.328 "base_bdevs_list": [ 00:11:54.328 { 00:11:54.328 "name": "BaseBdev1", 00:11:54.328 "uuid": "0a89f57e-66dd-4440-941e-4c94830a468c", 00:11:54.328 "is_configured": true, 00:11:54.328 "data_offset": 2048, 00:11:54.328 "data_size": 63488 00:11:54.328 }, 00:11:54.328 { 00:11:54.328 "name": null, 00:11:54.328 "uuid": "37af8724-a526-4af3-97a7-29dc07593087", 00:11:54.328 "is_configured": false, 00:11:54.328 "data_offset": 0, 00:11:54.328 "data_size": 63488 00:11:54.328 }, 00:11:54.328 { 00:11:54.328 "name": "BaseBdev3", 00:11:54.328 "uuid": "7e218220-9977-47f3-9a69-de05520f5f30", 00:11:54.328 "is_configured": true, 00:11:54.328 "data_offset": 2048, 00:11:54.328 "data_size": 63488 00:11:54.328 }, 00:11:54.328 { 00:11:54.328 "name": "BaseBdev4", 00:11:54.328 "uuid": 
"b00b66b6-0a09-465f-ae44-5294fe7b1a99", 00:11:54.328 "is_configured": true, 00:11:54.328 "data_offset": 2048, 00:11:54.328 "data_size": 63488 00:11:54.328 } 00:11:54.328 ] 00:11:54.328 }' 00:11:54.328 17:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.328 17:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.627 17:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.627 17:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.627 17:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.627 17:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:54.627 17:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.908 17:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:54.908 17:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:54.908 17:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.908 17:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.908 [2024-12-07 17:28:28.006972] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:54.908 17:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.908 17:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:54.908 17:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:54.908 17:28:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:54.908 17:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:54.908 17:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:54.908 17:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:54.908 17:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.908 17:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.908 17:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.908 17:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.908 17:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.908 17:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.908 17:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:54.908 17:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.908 17:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.908 17:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.908 "name": "Existed_Raid", 00:11:54.908 "uuid": "7de183c2-4a50-49c9-9372-1c725b2116dc", 00:11:54.908 "strip_size_kb": 0, 00:11:54.908 "state": "configuring", 00:11:54.908 "raid_level": "raid1", 00:11:54.908 "superblock": true, 00:11:54.908 "num_base_bdevs": 4, 00:11:54.908 "num_base_bdevs_discovered": 2, 00:11:54.908 "num_base_bdevs_operational": 4, 00:11:54.908 "base_bdevs_list": [ 00:11:54.908 { 00:11:54.908 "name": null, 00:11:54.908 
"uuid": "0a89f57e-66dd-4440-941e-4c94830a468c", 00:11:54.908 "is_configured": false, 00:11:54.908 "data_offset": 0, 00:11:54.908 "data_size": 63488 00:11:54.908 }, 00:11:54.908 { 00:11:54.909 "name": null, 00:11:54.909 "uuid": "37af8724-a526-4af3-97a7-29dc07593087", 00:11:54.909 "is_configured": false, 00:11:54.909 "data_offset": 0, 00:11:54.909 "data_size": 63488 00:11:54.909 }, 00:11:54.909 { 00:11:54.909 "name": "BaseBdev3", 00:11:54.909 "uuid": "7e218220-9977-47f3-9a69-de05520f5f30", 00:11:54.909 "is_configured": true, 00:11:54.909 "data_offset": 2048, 00:11:54.909 "data_size": 63488 00:11:54.909 }, 00:11:54.909 { 00:11:54.909 "name": "BaseBdev4", 00:11:54.909 "uuid": "b00b66b6-0a09-465f-ae44-5294fe7b1a99", 00:11:54.909 "is_configured": true, 00:11:54.909 "data_offset": 2048, 00:11:54.909 "data_size": 63488 00:11:54.909 } 00:11:54.909 ] 00:11:54.909 }' 00:11:54.909 17:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.909 17:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.477 17:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:55.477 17:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.477 17:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.477 17:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.477 17:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.477 17:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:55.477 17:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:55.477 17:28:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.477 17:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.478 [2024-12-07 17:28:28.587080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:55.478 17:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.478 17:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:55.478 17:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:55.478 17:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:55.478 17:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:55.478 17:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:55.478 17:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:55.478 17:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.478 17:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.478 17:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.478 17:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.478 17:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.478 17:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.478 17:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.478 17:28:28 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.478 17:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.478 17:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.478 "name": "Existed_Raid", 00:11:55.478 "uuid": "7de183c2-4a50-49c9-9372-1c725b2116dc", 00:11:55.478 "strip_size_kb": 0, 00:11:55.478 "state": "configuring", 00:11:55.478 "raid_level": "raid1", 00:11:55.478 "superblock": true, 00:11:55.478 "num_base_bdevs": 4, 00:11:55.478 "num_base_bdevs_discovered": 3, 00:11:55.478 "num_base_bdevs_operational": 4, 00:11:55.478 "base_bdevs_list": [ 00:11:55.478 { 00:11:55.478 "name": null, 00:11:55.478 "uuid": "0a89f57e-66dd-4440-941e-4c94830a468c", 00:11:55.478 "is_configured": false, 00:11:55.478 "data_offset": 0, 00:11:55.478 "data_size": 63488 00:11:55.478 }, 00:11:55.478 { 00:11:55.478 "name": "BaseBdev2", 00:11:55.478 "uuid": "37af8724-a526-4af3-97a7-29dc07593087", 00:11:55.478 "is_configured": true, 00:11:55.478 "data_offset": 2048, 00:11:55.478 "data_size": 63488 00:11:55.478 }, 00:11:55.478 { 00:11:55.478 "name": "BaseBdev3", 00:11:55.478 "uuid": "7e218220-9977-47f3-9a69-de05520f5f30", 00:11:55.478 "is_configured": true, 00:11:55.478 "data_offset": 2048, 00:11:55.478 "data_size": 63488 00:11:55.478 }, 00:11:55.478 { 00:11:55.478 "name": "BaseBdev4", 00:11:55.478 "uuid": "b00b66b6-0a09-465f-ae44-5294fe7b1a99", 00:11:55.478 "is_configured": true, 00:11:55.478 "data_offset": 2048, 00:11:55.478 "data_size": 63488 00:11:55.478 } 00:11:55.478 ] 00:11:55.478 }' 00:11:55.478 17:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.478 17:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.737 17:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.738 17:28:28 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.738 17:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:55.738 17:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.738 17:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.738 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:55.738 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:55.738 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.738 17:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.738 17:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.738 17:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.738 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0a89f57e-66dd-4440-941e-4c94830a468c 00:11:55.738 17:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.738 17:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.738 [2024-12-07 17:28:29.112331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:55.738 [2024-12-07 17:28:29.112573] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:55.738 [2024-12-07 17:28:29.112591] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:55.738 [2024-12-07 17:28:29.112853] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:11:55.738 [2024-12-07 17:28:29.113050] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:55.738 [2024-12-07 17:28:29.113071] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:55.738 [2024-12-07 17:28:29.113221] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:55.738 NewBaseBdev 00:11:55.738 17:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.738 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:55.738 17:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:55.738 17:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:55.738 17:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:55.738 17:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:55.738 17:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:55.738 17:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:55.738 17:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.738 17:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.998 17:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.998 17:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:55.998 17:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.998 17:28:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.999 [ 00:11:55.999 { 00:11:55.999 "name": "NewBaseBdev", 00:11:55.999 "aliases": [ 00:11:55.999 "0a89f57e-66dd-4440-941e-4c94830a468c" 00:11:55.999 ], 00:11:55.999 "product_name": "Malloc disk", 00:11:55.999 "block_size": 512, 00:11:55.999 "num_blocks": 65536, 00:11:55.999 "uuid": "0a89f57e-66dd-4440-941e-4c94830a468c", 00:11:55.999 "assigned_rate_limits": { 00:11:55.999 "rw_ios_per_sec": 0, 00:11:55.999 "rw_mbytes_per_sec": 0, 00:11:55.999 "r_mbytes_per_sec": 0, 00:11:55.999 "w_mbytes_per_sec": 0 00:11:55.999 }, 00:11:55.999 "claimed": true, 00:11:55.999 "claim_type": "exclusive_write", 00:11:55.999 "zoned": false, 00:11:55.999 "supported_io_types": { 00:11:55.999 "read": true, 00:11:55.999 "write": true, 00:11:55.999 "unmap": true, 00:11:55.999 "flush": true, 00:11:55.999 "reset": true, 00:11:55.999 "nvme_admin": false, 00:11:55.999 "nvme_io": false, 00:11:55.999 "nvme_io_md": false, 00:11:55.999 "write_zeroes": true, 00:11:55.999 "zcopy": true, 00:11:55.999 "get_zone_info": false, 00:11:55.999 "zone_management": false, 00:11:55.999 "zone_append": false, 00:11:55.999 "compare": false, 00:11:55.999 "compare_and_write": false, 00:11:55.999 "abort": true, 00:11:55.999 "seek_hole": false, 00:11:55.999 "seek_data": false, 00:11:55.999 "copy": true, 00:11:55.999 "nvme_iov_md": false 00:11:55.999 }, 00:11:55.999 "memory_domains": [ 00:11:55.999 { 00:11:55.999 "dma_device_id": "system", 00:11:55.999 "dma_device_type": 1 00:11:55.999 }, 00:11:55.999 { 00:11:55.999 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.999 "dma_device_type": 2 00:11:55.999 } 00:11:55.999 ], 00:11:55.999 "driver_specific": {} 00:11:55.999 } 00:11:55.999 ] 00:11:55.999 17:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.999 17:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:55.999 17:28:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:55.999 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:55.999 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:55.999 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:55.999 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:55.999 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:55.999 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.999 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.999 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.999 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.999 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.999 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.999 17:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.999 17:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.999 17:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.999 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.999 "name": "Existed_Raid", 00:11:55.999 "uuid": "7de183c2-4a50-49c9-9372-1c725b2116dc", 00:11:55.999 "strip_size_kb": 0, 00:11:55.999 
"state": "online", 00:11:55.999 "raid_level": "raid1", 00:11:55.999 "superblock": true, 00:11:55.999 "num_base_bdevs": 4, 00:11:55.999 "num_base_bdevs_discovered": 4, 00:11:55.999 "num_base_bdevs_operational": 4, 00:11:55.999 "base_bdevs_list": [ 00:11:55.999 { 00:11:55.999 "name": "NewBaseBdev", 00:11:55.999 "uuid": "0a89f57e-66dd-4440-941e-4c94830a468c", 00:11:55.999 "is_configured": true, 00:11:55.999 "data_offset": 2048, 00:11:55.999 "data_size": 63488 00:11:55.999 }, 00:11:55.999 { 00:11:55.999 "name": "BaseBdev2", 00:11:55.999 "uuid": "37af8724-a526-4af3-97a7-29dc07593087", 00:11:55.999 "is_configured": true, 00:11:55.999 "data_offset": 2048, 00:11:55.999 "data_size": 63488 00:11:55.999 }, 00:11:55.999 { 00:11:55.999 "name": "BaseBdev3", 00:11:55.999 "uuid": "7e218220-9977-47f3-9a69-de05520f5f30", 00:11:55.999 "is_configured": true, 00:11:55.999 "data_offset": 2048, 00:11:55.999 "data_size": 63488 00:11:55.999 }, 00:11:55.999 { 00:11:55.999 "name": "BaseBdev4", 00:11:55.999 "uuid": "b00b66b6-0a09-465f-ae44-5294fe7b1a99", 00:11:55.999 "is_configured": true, 00:11:55.999 "data_offset": 2048, 00:11:55.999 "data_size": 63488 00:11:55.999 } 00:11:55.999 ] 00:11:55.999 }' 00:11:55.999 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.999 17:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.259 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:56.259 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:56.259 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:56.259 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:56.259 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:56.259 
17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:56.259 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:56.259 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:56.259 17:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.259 17:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.259 [2024-12-07 17:28:29.619794] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:56.520 17:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.520 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:56.520 "name": "Existed_Raid", 00:11:56.520 "aliases": [ 00:11:56.520 "7de183c2-4a50-49c9-9372-1c725b2116dc" 00:11:56.520 ], 00:11:56.520 "product_name": "Raid Volume", 00:11:56.520 "block_size": 512, 00:11:56.520 "num_blocks": 63488, 00:11:56.520 "uuid": "7de183c2-4a50-49c9-9372-1c725b2116dc", 00:11:56.520 "assigned_rate_limits": { 00:11:56.520 "rw_ios_per_sec": 0, 00:11:56.520 "rw_mbytes_per_sec": 0, 00:11:56.520 "r_mbytes_per_sec": 0, 00:11:56.520 "w_mbytes_per_sec": 0 00:11:56.520 }, 00:11:56.520 "claimed": false, 00:11:56.520 "zoned": false, 00:11:56.520 "supported_io_types": { 00:11:56.520 "read": true, 00:11:56.520 "write": true, 00:11:56.520 "unmap": false, 00:11:56.520 "flush": false, 00:11:56.520 "reset": true, 00:11:56.520 "nvme_admin": false, 00:11:56.520 "nvme_io": false, 00:11:56.520 "nvme_io_md": false, 00:11:56.520 "write_zeroes": true, 00:11:56.520 "zcopy": false, 00:11:56.520 "get_zone_info": false, 00:11:56.520 "zone_management": false, 00:11:56.520 "zone_append": false, 00:11:56.520 "compare": false, 00:11:56.520 "compare_and_write": false, 00:11:56.520 
"abort": false, 00:11:56.520 "seek_hole": false, 00:11:56.520 "seek_data": false, 00:11:56.520 "copy": false, 00:11:56.520 "nvme_iov_md": false 00:11:56.520 }, 00:11:56.520 "memory_domains": [ 00:11:56.520 { 00:11:56.520 "dma_device_id": "system", 00:11:56.520 "dma_device_type": 1 00:11:56.520 }, 00:11:56.520 { 00:11:56.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.520 "dma_device_type": 2 00:11:56.520 }, 00:11:56.520 { 00:11:56.520 "dma_device_id": "system", 00:11:56.520 "dma_device_type": 1 00:11:56.520 }, 00:11:56.520 { 00:11:56.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.520 "dma_device_type": 2 00:11:56.520 }, 00:11:56.520 { 00:11:56.520 "dma_device_id": "system", 00:11:56.520 "dma_device_type": 1 00:11:56.520 }, 00:11:56.520 { 00:11:56.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.520 "dma_device_type": 2 00:11:56.520 }, 00:11:56.520 { 00:11:56.520 "dma_device_id": "system", 00:11:56.520 "dma_device_type": 1 00:11:56.520 }, 00:11:56.520 { 00:11:56.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.520 "dma_device_type": 2 00:11:56.520 } 00:11:56.520 ], 00:11:56.520 "driver_specific": { 00:11:56.520 "raid": { 00:11:56.520 "uuid": "7de183c2-4a50-49c9-9372-1c725b2116dc", 00:11:56.520 "strip_size_kb": 0, 00:11:56.520 "state": "online", 00:11:56.520 "raid_level": "raid1", 00:11:56.520 "superblock": true, 00:11:56.520 "num_base_bdevs": 4, 00:11:56.520 "num_base_bdevs_discovered": 4, 00:11:56.520 "num_base_bdevs_operational": 4, 00:11:56.520 "base_bdevs_list": [ 00:11:56.520 { 00:11:56.521 "name": "NewBaseBdev", 00:11:56.521 "uuid": "0a89f57e-66dd-4440-941e-4c94830a468c", 00:11:56.521 "is_configured": true, 00:11:56.521 "data_offset": 2048, 00:11:56.521 "data_size": 63488 00:11:56.521 }, 00:11:56.521 { 00:11:56.521 "name": "BaseBdev2", 00:11:56.521 "uuid": "37af8724-a526-4af3-97a7-29dc07593087", 00:11:56.521 "is_configured": true, 00:11:56.521 "data_offset": 2048, 00:11:56.521 "data_size": 63488 00:11:56.521 }, 00:11:56.521 { 
00:11:56.521 "name": "BaseBdev3", 00:11:56.521 "uuid": "7e218220-9977-47f3-9a69-de05520f5f30", 00:11:56.521 "is_configured": true, 00:11:56.521 "data_offset": 2048, 00:11:56.521 "data_size": 63488 00:11:56.521 }, 00:11:56.521 { 00:11:56.521 "name": "BaseBdev4", 00:11:56.521 "uuid": "b00b66b6-0a09-465f-ae44-5294fe7b1a99", 00:11:56.521 "is_configured": true, 00:11:56.521 "data_offset": 2048, 00:11:56.521 "data_size": 63488 00:11:56.521 } 00:11:56.521 ] 00:11:56.521 } 00:11:56.521 } 00:11:56.521 }' 00:11:56.521 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:56.521 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:56.521 BaseBdev2 00:11:56.521 BaseBdev3 00:11:56.521 BaseBdev4' 00:11:56.521 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:56.521 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:56.521 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:56.521 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:56.521 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:56.521 17:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.521 17:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.521 17:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.521 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:11:56.521 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:56.521 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:56.521 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:56.521 17:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.521 17:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.521 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:56.521 17:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.521 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:56.521 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:56.521 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:56.521 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:56.521 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:56.521 17:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.521 17:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.521 17:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.521 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:56.521 17:28:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:56.521 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:56.521 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:56.521 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:56.521 17:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.521 17:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.521 17:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.521 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:56.521 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:56.521 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:56.521 17:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.521 17:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.521 [2024-12-07 17:28:29.879033] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:56.521 [2024-12-07 17:28:29.879059] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:56.521 [2024-12-07 17:28:29.879136] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:56.521 [2024-12-07 17:28:29.879470] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:56.521 [2024-12-07 17:28:29.879485] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:11:56.521 17:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.521 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73873 00:11:56.521 17:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 73873 ']' 00:11:56.521 17:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 73873 00:11:56.521 17:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:56.521 17:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:56.521 17:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73873 00:11:56.781 killing process with pid 73873 00:11:56.781 17:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:56.781 17:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:56.781 17:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73873' 00:11:56.781 17:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 73873 00:11:56.781 [2024-12-07 17:28:29.925011] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:56.781 17:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 73873 00:11:57.041 [2024-12-07 17:28:30.352138] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:58.424 17:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:58.424 00:11:58.424 real 0m11.906s 00:11:58.424 user 0m18.539s 00:11:58.424 sys 0m2.319s 00:11:58.424 ************************************ 00:11:58.424 END TEST raid_state_function_test_sb 
00:11:58.424 ************************************ 00:11:58.424 17:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:58.424 17:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.424 17:28:31 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:11:58.424 17:28:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:58.424 17:28:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:58.424 17:28:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:58.424 ************************************ 00:11:58.424 START TEST raid_superblock_test 00:11:58.425 ************************************ 00:11:58.425 17:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:11:58.425 17:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:11:58.425 17:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:58.425 17:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:58.425 17:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:58.425 17:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:58.425 17:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:58.425 17:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:58.425 17:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:58.425 17:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:58.425 17:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:58.425 17:28:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:58.425 17:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:58.425 17:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:58.425 17:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:11:58.425 17:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:11:58.425 17:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74545 00:11:58.425 17:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74545 00:11:58.425 17:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:58.425 17:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74545 ']' 00:11:58.425 17:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:58.425 17:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:58.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:58.425 17:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:58.425 17:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:58.425 17:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.425 [2024-12-07 17:28:31.736081] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:11:58.425 [2024-12-07 17:28:31.736224] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74545 ] 00:11:58.685 [2024-12-07 17:28:31.898742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:58.685 [2024-12-07 17:28:32.045335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.945 [2024-12-07 17:28:32.293171] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:58.945 [2024-12-07 17:28:32.293371] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:59.205 17:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:59.205 17:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:59.205 17:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:59.205 17:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:59.205 17:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:59.205 17:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:59.205 17:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:59.205 17:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:59.205 17:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:59.205 17:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:59.205 17:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:59.205 
17:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.466 17:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.466 malloc1 00:11:59.466 17:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.466 17:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:59.466 17:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.466 17:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.466 [2024-12-07 17:28:32.642630] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:59.466 [2024-12-07 17:28:32.642783] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:59.466 [2024-12-07 17:28:32.642825] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:59.466 [2024-12-07 17:28:32.642854] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:59.466 [2024-12-07 17:28:32.645394] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:59.466 [2024-12-07 17:28:32.645473] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:59.466 pt1 00:11:59.466 17:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.466 17:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:59.466 17:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:59.466 17:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:59.466 17:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:59.466 17:28:32 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:59.466 17:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:59.466 17:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:59.466 17:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:59.466 17:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:59.466 17:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.466 17:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.466 malloc2 00:11:59.466 17:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.466 17:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:59.466 17:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.466 17:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.466 [2024-12-07 17:28:32.705355] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:59.466 [2024-12-07 17:28:32.705483] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:59.466 [2024-12-07 17:28:32.705531] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:59.466 [2024-12-07 17:28:32.705574] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:59.466 [2024-12-07 17:28:32.708056] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:59.466 [2024-12-07 17:28:32.708130] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:59.466 
pt2 00:11:59.466 17:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.466 17:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:59.466 17:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:59.466 17:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:59.466 17:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:59.466 17:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:59.466 17:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:59.466 17:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:59.466 17:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:59.466 17:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:59.466 17:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.466 17:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.466 malloc3 00:11:59.466 17:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.466 17:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:59.466 17:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.466 17:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.466 [2024-12-07 17:28:32.785486] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:59.467 [2024-12-07 17:28:32.785548] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:59.467 [2024-12-07 17:28:32.785571] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:59.467 [2024-12-07 17:28:32.785580] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:59.467 [2024-12-07 17:28:32.788103] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:59.467 [2024-12-07 17:28:32.788139] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:59.467 pt3 00:11:59.467 17:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.467 17:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:59.467 17:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:59.467 17:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:59.467 17:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:59.467 17:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:59.467 17:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:59.467 17:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:59.467 17:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:59.467 17:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:59.467 17:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.467 17:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.467 malloc4 00:11:59.467 17:28:32 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.467 17:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:59.467 17:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.467 17:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.727 [2024-12-07 17:28:32.847588] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:59.727 [2024-12-07 17:28:32.847654] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:59.727 [2024-12-07 17:28:32.847676] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:59.727 [2024-12-07 17:28:32.847686] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:59.728 [2024-12-07 17:28:32.850122] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:59.728 [2024-12-07 17:28:32.850155] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:59.728 pt4 00:11:59.728 17:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.728 17:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:59.728 17:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:59.728 17:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:59.728 17:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.728 17:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.728 [2024-12-07 17:28:32.859595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:59.728 [2024-12-07 17:28:32.861698] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:59.728 [2024-12-07 17:28:32.861766] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:59.728 [2024-12-07 17:28:32.861832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:59.728 [2024-12-07 17:28:32.862045] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:59.728 [2024-12-07 17:28:32.862064] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:59.728 [2024-12-07 17:28:32.862331] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:59.728 [2024-12-07 17:28:32.862531] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:59.728 [2024-12-07 17:28:32.862548] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:59.728 [2024-12-07 17:28:32.862701] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:59.728 17:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.728 17:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:59.728 17:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:59.728 17:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:59.728 17:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:59.728 17:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:59.728 17:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:59.728 17:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.728 
17:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.728 17:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.728 17:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.728 17:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.728 17:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.728 17:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:59.728 17:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.728 17:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.728 17:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.728 "name": "raid_bdev1", 00:11:59.728 "uuid": "4a46e376-dded-4517-9a04-9a0a781f2ec6", 00:11:59.728 "strip_size_kb": 0, 00:11:59.728 "state": "online", 00:11:59.728 "raid_level": "raid1", 00:11:59.728 "superblock": true, 00:11:59.728 "num_base_bdevs": 4, 00:11:59.728 "num_base_bdevs_discovered": 4, 00:11:59.728 "num_base_bdevs_operational": 4, 00:11:59.728 "base_bdevs_list": [ 00:11:59.728 { 00:11:59.728 "name": "pt1", 00:11:59.728 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:59.728 "is_configured": true, 00:11:59.728 "data_offset": 2048, 00:11:59.728 "data_size": 63488 00:11:59.728 }, 00:11:59.728 { 00:11:59.728 "name": "pt2", 00:11:59.728 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:59.728 "is_configured": true, 00:11:59.728 "data_offset": 2048, 00:11:59.728 "data_size": 63488 00:11:59.728 }, 00:11:59.728 { 00:11:59.728 "name": "pt3", 00:11:59.728 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:59.728 "is_configured": true, 00:11:59.728 "data_offset": 2048, 00:11:59.728 "data_size": 63488 
00:11:59.728 }, 00:11:59.728 { 00:11:59.728 "name": "pt4", 00:11:59.728 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:59.728 "is_configured": true, 00:11:59.728 "data_offset": 2048, 00:11:59.728 "data_size": 63488 00:11:59.728 } 00:11:59.728 ] 00:11:59.728 }' 00:11:59.728 17:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.728 17:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.989 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:59.989 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:59.989 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:59.989 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:59.989 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:59.989 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:59.989 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:59.989 17:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.989 17:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.989 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:59.989 [2024-12-07 17:28:33.355217] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:00.249 17:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.249 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:00.249 "name": "raid_bdev1", 00:12:00.249 "aliases": [ 00:12:00.249 "4a46e376-dded-4517-9a04-9a0a781f2ec6" 00:12:00.249 ], 
00:12:00.249 "product_name": "Raid Volume",
00:12:00.249 "block_size": 512,
00:12:00.249 "num_blocks": 63488,
00:12:00.249 "uuid": "4a46e376-dded-4517-9a04-9a0a781f2ec6",
00:12:00.249 "assigned_rate_limits": {
00:12:00.249 "rw_ios_per_sec": 0,
00:12:00.249 "rw_mbytes_per_sec": 0,
00:12:00.249 "r_mbytes_per_sec": 0,
00:12:00.249 "w_mbytes_per_sec": 0
00:12:00.249 },
00:12:00.249 "claimed": false,
00:12:00.249 "zoned": false,
00:12:00.249 "supported_io_types": {
00:12:00.249 "read": true,
00:12:00.249 "write": true,
00:12:00.249 "unmap": false,
00:12:00.249 "flush": false,
00:12:00.249 "reset": true,
00:12:00.249 "nvme_admin": false,
00:12:00.249 "nvme_io": false,
00:12:00.249 "nvme_io_md": false,
00:12:00.249 "write_zeroes": true,
00:12:00.249 "zcopy": false,
00:12:00.249 "get_zone_info": false,
00:12:00.249 "zone_management": false,
00:12:00.249 "zone_append": false,
00:12:00.249 "compare": false,
00:12:00.249 "compare_and_write": false,
00:12:00.249 "abort": false,
00:12:00.249 "seek_hole": false,
00:12:00.249 "seek_data": false,
00:12:00.249 "copy": false,
00:12:00.249 "nvme_iov_md": false
00:12:00.249 },
00:12:00.249 "memory_domains": [
00:12:00.249 {
00:12:00.249 "dma_device_id": "system",
00:12:00.249 "dma_device_type": 1
00:12:00.249 },
00:12:00.249 {
00:12:00.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:00.249 "dma_device_type": 2
00:12:00.249 },
00:12:00.249 {
00:12:00.249 "dma_device_id": "system",
00:12:00.249 "dma_device_type": 1
00:12:00.249 },
00:12:00.249 {
00:12:00.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:00.249 "dma_device_type": 2
00:12:00.249 },
00:12:00.249 {
00:12:00.249 "dma_device_id": "system",
00:12:00.249 "dma_device_type": 1
00:12:00.249 },
00:12:00.249 {
00:12:00.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:00.249 "dma_device_type": 2
00:12:00.249 },
00:12:00.249 {
00:12:00.249 "dma_device_id": "system",
00:12:00.249 "dma_device_type": 1
00:12:00.249 },
00:12:00.249 {
00:12:00.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:00.249 "dma_device_type": 2
00:12:00.249 }
00:12:00.249 ],
00:12:00.249 "driver_specific": {
00:12:00.249 "raid": {
00:12:00.249 "uuid": "4a46e376-dded-4517-9a04-9a0a781f2ec6",
00:12:00.249 "strip_size_kb": 0,
00:12:00.249 "state": "online",
00:12:00.249 "raid_level": "raid1",
00:12:00.249 "superblock": true,
00:12:00.249 "num_base_bdevs": 4,
00:12:00.249 "num_base_bdevs_discovered": 4,
00:12:00.249 "num_base_bdevs_operational": 4,
00:12:00.249 "base_bdevs_list": [
00:12:00.249 {
00:12:00.249 "name": "pt1",
00:12:00.249 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:00.249 "is_configured": true,
00:12:00.249 "data_offset": 2048,
00:12:00.249 "data_size": 63488
00:12:00.249 },
00:12:00.249 {
00:12:00.249 "name": "pt2",
00:12:00.249 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:00.249 "is_configured": true,
00:12:00.249 "data_offset": 2048,
00:12:00.249 "data_size": 63488
00:12:00.249 },
00:12:00.249 {
00:12:00.249 "name": "pt3",
00:12:00.249 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:00.249 "is_configured": true,
00:12:00.249 "data_offset": 2048,
00:12:00.249 "data_size": 63488
00:12:00.249 },
00:12:00.249 {
00:12:00.249 "name": "pt4",
00:12:00.249 "uuid": "00000000-0000-0000-0000-000000000004",
00:12:00.249 "is_configured": true,
00:12:00.249 "data_offset": 2048,
00:12:00.249 "data_size": 63488
00:12:00.249 }
00:12:00.249 ]
00:12:00.249 }
00:12:00.249 }
00:12:00.249 }'
00:12:00.250 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:12:00.250 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:12:00.250 pt2
00:12:00.250 pt3
00:12:00.250 pt4'
00:12:00.250 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:00.250 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:12:00.250 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:00.250 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:12:00.250 17:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:00.250 17:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:00.250 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:00.250 17:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:00.250 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:00.250 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:00.250 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:00.250 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:12:00.250 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:00.250 17:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:00.250 17:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:00.250 17:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:00.250 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:00.250 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:00.250 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:00.250 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:00.250 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:12:00.250 17:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:00.250 17:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:00.250 17:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:00.250 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:00.250 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:00.250 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:00.250 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:00.250 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:12:00.511 17:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:00.511 17:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:00.511 17:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:00.511 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:00.511 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:00.511 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:12:00.511 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:12:00.511 17:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:00.511 17:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:00.511 [2024-12-07 17:28:33.666569] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:00.511 17:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:00.511 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4a46e376-dded-4517-9a04-9a0a781f2ec6
00:12:00.511 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 4a46e376-dded-4517-9a04-9a0a781f2ec6 ']'
00:12:00.511 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:12:00.511 17:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:00.511 17:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:00.511 [2024-12-07 17:28:33.714120] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:12:00.511 [2024-12-07 17:28:33.714198] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:12:00.511 [2024-12-07 17:28:33.714315] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:12:00.511 [2024-12-07 17:28:33.714442] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:12:00.511 [2024-12-07 17:28:33.714497] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:12:00.511 17:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:00.511 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:00.511 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:12:00.511 17:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:00.511 17:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:00.511 17:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:00.511 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:12:00.511 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:12:00.512 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:12:00.512 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:12:00.512 17:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:00.512 17:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:00.512 17:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:00.512 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:12:00.512 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:12:00.512 17:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:00.512 17:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:00.512 17:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:00.512 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:12:00.512 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:12:00.512 17:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:00.512 17:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:00.512 17:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:00.512 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:12:00.512 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4
00:12:00.512 17:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:00.512 17:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:00.512 17:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:00.512 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:12:00.512 17:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:00.512 17:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:00.512 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:12:00.512 17:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:00.512 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:12:00.512 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:12:00.512 17:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0
00:12:00.512 17:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:12:00.512 17:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:12:00.512 17:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:00.512 17:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:12:00.512 17:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:00.512 17:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:12:00.512 17:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:00.512 17:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:00.512 [2024-12-07 17:28:33.881854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:12:00.512 [2024-12-07 17:28:33.884119] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:12:00.512 [2024-12-07 17:28:33.884175] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:12:00.512 [2024-12-07 17:28:33.884211] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:12:00.512 [2024-12-07 17:28:33.884267] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:12:00.512 [2024-12-07 17:28:33.884322] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:12:00.512 [2024-12-07 17:28:33.884341] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:12:00.512 [2024-12-07 17:28:33.884359] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4
00:12:00.512 [2024-12-07 17:28:33.884374] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:12:00.512 [2024-12-07 17:28:33.884387] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:12:00.512 request:
00:12:00.512 {
00:12:00.512 "name": "raid_bdev1",
00:12:00.512 "raid_level": "raid1",
00:12:00.512 "base_bdevs": [
00:12:00.512 "malloc1",
00:12:00.512 "malloc2",
00:12:00.512 "malloc3",
00:12:00.512 "malloc4"
00:12:00.512 ],
00:12:00.512 "superblock": false,
00:12:00.512 "method": "bdev_raid_create",
00:12:00.512 "req_id": 1
00:12:00.512 }
00:12:00.512 Got JSON-RPC error response
00:12:00.512 response:
00:12:00.512 {
00:12:00.512 "code": -17,
00:12:00.512 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:12:00.512 }
00:12:00.512 17:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:12:00.512 17:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:12:00.512 17:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:12:00.512 17:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:12:00.773 17:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:12:00.773 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:00.773 17:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:00.773 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:12:00.773 17:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:00.773 17:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:00.773 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:12:00.773 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:12:00.773 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:12:00.773 17:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:00.773 17:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:00.773 [2024-12-07 17:28:33.949708] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:12:00.773 [2024-12-07 17:28:33.949819] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:00.773 [2024-12-07 17:28:33.949855] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:12:00.773 [2024-12-07 17:28:33.949887] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:00.773 [2024-12-07 17:28:33.952524] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:00.773 [2024-12-07 17:28:33.952603] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:12:00.773 [2024-12-07 17:28:33.952716] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:12:00.773 [2024-12-07 17:28:33.952815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:12:00.773 pt1
00:12:00.773 17:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:00.773 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4
00:12:00.773 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:00.773 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:00.773 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:00.773 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:00.773 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:00.773 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:00.773 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:00.773 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:00.773 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:00.773 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:00.773 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:00.773 17:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:00.773 17:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:00.773 17:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:00.773 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:00.773 "name": "raid_bdev1",
00:12:00.773 "uuid": "4a46e376-dded-4517-9a04-9a0a781f2ec6",
00:12:00.773 "strip_size_kb": 0,
00:12:00.773 "state": "configuring",
00:12:00.773 "raid_level": "raid1",
00:12:00.773 "superblock": true,
00:12:00.773 "num_base_bdevs": 4,
00:12:00.773 "num_base_bdevs_discovered": 1,
00:12:00.773 "num_base_bdevs_operational": 4,
00:12:00.773 "base_bdevs_list": [
00:12:00.773 {
00:12:00.773 "name": "pt1",
00:12:00.773 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:00.773 "is_configured": true,
00:12:00.773 "data_offset": 2048,
00:12:00.773 "data_size": 63488
00:12:00.773 },
00:12:00.773 {
00:12:00.773 "name": null,
00:12:00.773 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:00.773 "is_configured": false,
00:12:00.773 "data_offset": 2048,
00:12:00.773 "data_size": 63488
00:12:00.773 },
00:12:00.773 {
00:12:00.773 "name": null,
00:12:00.773 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:00.773 "is_configured": false,
00:12:00.773 "data_offset": 2048,
00:12:00.773 "data_size": 63488
00:12:00.773 },
00:12:00.773 {
00:12:00.773 "name": null,
00:12:00.773 "uuid": "00000000-0000-0000-0000-000000000004",
00:12:00.773 "is_configured": false,
00:12:00.773 "data_offset": 2048,
00:12:00.773 "data_size": 63488
00:12:00.773 }
00:12:00.773 ]
00:12:00.773 }'
00:12:00.773 17:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:00.773 17:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:01.343 17:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']'
00:12:01.343 17:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:12:01.343 17:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:01.343 17:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:01.343 [2024-12-07 17:28:34.424975] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:12:01.343 [2024-12-07 17:28:34.425178] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:01.343 [2024-12-07 17:28:34.425211] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:12:01.343 [2024-12-07 17:28:34.425226] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:01.343 [2024-12-07 17:28:34.425809] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:01.343 [2024-12-07 17:28:34.425833] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:12:01.343 [2024-12-07 17:28:34.425935] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:12:01.343 [2024-12-07 17:28:34.425978] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:12:01.343 pt2
00:12:01.343 17:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:01.343 17:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:12:01.343 17:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:01.343 17:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:01.343 [2024-12-07 17:28:34.436923] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:12:01.343 17:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:01.343 17:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4
00:12:01.343 17:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:01.343 17:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:01.343 17:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:01.343 17:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:01.343 17:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:01.343 17:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:01.343 17:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:01.343 17:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:01.343 17:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:01.343 17:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:01.343 17:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:01.343 17:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:01.343 17:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:01.343 17:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:01.343 17:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:01.343 "name": "raid_bdev1",
00:12:01.343 "uuid": "4a46e376-dded-4517-9a04-9a0a781f2ec6",
00:12:01.343 "strip_size_kb": 0,
00:12:01.343 "state": "configuring",
00:12:01.343 "raid_level": "raid1",
00:12:01.343 "superblock": true,
00:12:01.343 "num_base_bdevs": 4,
00:12:01.343 "num_base_bdevs_discovered": 1,
00:12:01.343 "num_base_bdevs_operational": 4,
00:12:01.343 "base_bdevs_list": [
00:12:01.343 {
00:12:01.343 "name": "pt1",
00:12:01.343 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:01.343 "is_configured": true,
00:12:01.343 "data_offset": 2048,
00:12:01.343 "data_size": 63488
00:12:01.343 },
00:12:01.343 {
00:12:01.343 "name": null,
00:12:01.343 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:01.343 "is_configured": false,
00:12:01.343 "data_offset": 0,
00:12:01.343 "data_size": 63488
00:12:01.343 },
00:12:01.343 {
00:12:01.343 "name": null,
00:12:01.343 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:01.343 "is_configured": false,
00:12:01.343 "data_offset": 2048,
00:12:01.343 "data_size": 63488
00:12:01.343 },
00:12:01.343 {
00:12:01.343 "name": null,
00:12:01.343 "uuid": "00000000-0000-0000-0000-000000000004",
00:12:01.343 "is_configured": false,
00:12:01.343 "data_offset": 2048,
00:12:01.343 "data_size": 63488
00:12:01.343 }
00:12:01.343 ]
00:12:01.343 }'
00:12:01.343 17:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:01.343 17:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:01.605 17:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:12:01.605 17:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:12:01.605 17:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:12:01.605 17:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:01.605 17:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:01.605 [2024-12-07 17:28:34.888175] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:12:01.605 [2024-12-07 17:28:34.888261] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:01.605 [2024-12-07 17:28:34.888286] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:12:01.605 [2024-12-07 17:28:34.888296] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:01.605 [2024-12-07 17:28:34.888864] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:01.605 [2024-12-07 17:28:34.888884] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:12:01.605 [2024-12-07 17:28:34.889027] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:12:01.605 [2024-12-07 17:28:34.889058] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:12:01.605 pt2
00:12:01.605 17:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:01.605 17:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:12:01.605 17:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:12:01.605 17:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:12:01.605 17:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:01.605 17:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:01.605 [2024-12-07 17:28:34.896112] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:12:01.605 [2024-12-07 17:28:34.896166] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:01.605 [2024-12-07 17:28:34.896187] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:12:01.605 [2024-12-07 17:28:34.896197] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:01.605 [2024-12-07 17:28:34.896609] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:01.605 [2024-12-07 17:28:34.896632] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:12:01.605 [2024-12-07 17:28:34.896715] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:12:01.605 [2024-12-07 17:28:34.896736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:12:01.605 pt3
00:12:01.605 17:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:01.605 17:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:12:01.605 17:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:12:01.605 17:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:12:01.605 17:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:01.605 17:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:01.605 [2024-12-07 17:28:34.904062] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:12:01.605 [2024-12-07 17:28:34.904108] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:01.605 [2024-12-07 17:28:34.904124] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
00:12:01.605 [2024-12-07 17:28:34.904133] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:01.605 [2024-12-07 17:28:34.904528] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:01.605 [2024-12-07 17:28:34.904544] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:12:01.605 [2024-12-07 17:28:34.904607] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
00:12:01.605 [2024-12-07 17:28:34.904632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:12:01.605 [2024-12-07 17:28:34.904794] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:12:01.605 [2024-12-07 17:28:34.904803] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:12:01.605 [2024-12-07 17:28:34.905112] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:12:01.605 [2024-12-07 17:28:34.905281] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:12:01.605 [2024-12-07 17:28:34.905302] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:12:01.605 [2024-12-07 17:28:34.905442] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:01.605 pt4
00:12:01.605 17:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:01.605 17:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:12:01.605 17:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:12:01.605 17:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:12:01.605 17:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:01.605 17:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:01.605 17:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:01.605 17:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:01.605 17:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:01.605 17:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:01.605 17:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:01.605 17:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:01.605 17:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:01.605 17:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:01.605 17:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:01.605 17:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:01.605 17:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:01.605 17:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:01.605 17:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:01.605 "name": "raid_bdev1",
00:12:01.605 "uuid": "4a46e376-dded-4517-9a04-9a0a781f2ec6",
00:12:01.605 "strip_size_kb": 0,
00:12:01.605 "state": "online",
00:12:01.605 "raid_level": "raid1",
00:12:01.605 "superblock": true,
00:12:01.605 "num_base_bdevs": 4,
00:12:01.605 "num_base_bdevs_discovered": 4,
00:12:01.605 "num_base_bdevs_operational": 4,
00:12:01.605 "base_bdevs_list": [
00:12:01.605 {
00:12:01.605 "name": "pt1",
00:12:01.605 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:01.605 "is_configured": true,
00:12:01.605 "data_offset": 2048,
00:12:01.605 "data_size": 63488
00:12:01.605 },
00:12:01.605 {
00:12:01.605 "name": "pt2",
00:12:01.605 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:01.605 "is_configured": true,
00:12:01.605 "data_offset": 2048,
00:12:01.605 "data_size": 63488
00:12:01.605 },
00:12:01.605 {
00:12:01.605 "name": "pt3",
00:12:01.605 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:01.605 "is_configured": true,
00:12:01.605 "data_offset": 2048,
00:12:01.605 "data_size": 63488
00:12:01.605 },
00:12:01.605 {
00:12:01.605 "name": "pt4",
00:12:01.605 "uuid": "00000000-0000-0000-0000-000000000004",
00:12:01.605 "is_configured": true,
00:12:01.605 "data_offset": 2048,
00:12:01.605 "data_size": 63488
00:12:01.605 }
00:12:01.605 ]
00:12:01.605 }'
00:12:01.605 17:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:01.605 17:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:02.181 17:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:12:02.181 17:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:12:02.181 17:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:12:02.182 17:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:12:02.182 17:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:12:02.182 17:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:12:02.182 17:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:12:02.182 17:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:12:02.182 17:28:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:02.182 17:28:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:02.182 [2024-12-07 17:28:35.391726] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:02.182 17:28:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:02.182 17:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:12:02.182 "name": "raid_bdev1",
00:12:02.182 "aliases": [
00:12:02.182 "4a46e376-dded-4517-9a04-9a0a781f2ec6"
00:12:02.182 ],
00:12:02.182 "product_name": "Raid Volume",
00:12:02.182 "block_size": 512,
00:12:02.182 "num_blocks": 63488,
00:12:02.182 "uuid": "4a46e376-dded-4517-9a04-9a0a781f2ec6",
00:12:02.182 "assigned_rate_limits": {
00:12:02.182 "rw_ios_per_sec": 0,
00:12:02.182 "rw_mbytes_per_sec": 0,
00:12:02.182 "r_mbytes_per_sec": 0,
00:12:02.182 "w_mbytes_per_sec": 0
00:12:02.182 },
00:12:02.182 "claimed": false,
00:12:02.182 "zoned": false,
00:12:02.182 "supported_io_types": {
00:12:02.182 "read": true,
00:12:02.182 "write": true,
00:12:02.182 "unmap": false,
00:12:02.182 "flush": false,
00:12:02.182 "reset": true,
00:12:02.182 "nvme_admin": false,
00:12:02.182 "nvme_io": false,
00:12:02.182 "nvme_io_md": false,
00:12:02.182 "write_zeroes": true,
00:12:02.182 "zcopy": false,
00:12:02.182 "get_zone_info": false,
00:12:02.182 "zone_management": false,
00:12:02.182 "zone_append": false,
00:12:02.182 "compare": false,
00:12:02.182 "compare_and_write": false,
00:12:02.182 "abort": false,
00:12:02.182 "seek_hole": false,
00:12:02.182 "seek_data": false,
00:12:02.182 "copy": false,
00:12:02.182 "nvme_iov_md": false
00:12:02.182 },
00:12:02.182 "memory_domains": [
00:12:02.182 {
00:12:02.182 "dma_device_id": "system",
00:12:02.182 "dma_device_type": 1
00:12:02.182 },
00:12:02.182 {
00:12:02.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:02.182 "dma_device_type": 2
00:12:02.182 },
00:12:02.182 {
00:12:02.182 "dma_device_id": "system",
00:12:02.182 "dma_device_type": 1
00:12:02.182 },
00:12:02.182 {
00:12:02.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:02.182 "dma_device_type": 2
00:12:02.182 },
00:12:02.182 {
00:12:02.182 "dma_device_id": "system",
00:12:02.182 "dma_device_type": 1
00:12:02.182 },
00:12:02.182 {
00:12:02.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:02.182 "dma_device_type": 2
00:12:02.182 },
00:12:02.182 {
00:12:02.182 "dma_device_id": "system",
00:12:02.182 "dma_device_type": 1
00:12:02.182 },
00:12:02.182 {
00:12:02.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:02.182 "dma_device_type": 2
00:12:02.182 }
00:12:02.182 ],
00:12:02.182 "driver_specific": {
00:12:02.182 "raid": {
00:12:02.182 "uuid": "4a46e376-dded-4517-9a04-9a0a781f2ec6",
00:12:02.182 "strip_size_kb": 0,
00:12:02.182 "state": "online",
00:12:02.182 "raid_level": "raid1",
00:12:02.182 "superblock": true,
00:12:02.182 "num_base_bdevs": 4,
00:12:02.182 "num_base_bdevs_discovered": 4,
00:12:02.182 "num_base_bdevs_operational": 4,
00:12:02.182 "base_bdevs_list": [
00:12:02.182 {
00:12:02.182 "name": "pt1",
00:12:02.182 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:02.182 "is_configured": true,
00:12:02.182 "data_offset": 2048,
00:12:02.182 "data_size": 63488
00:12:02.182 },
00:12:02.182 {
00:12:02.182 "name": "pt2",
00:12:02.182 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:02.182 "is_configured": true,
00:12:02.182 "data_offset": 2048,
00:12:02.182 "data_size": 63488
00:12:02.182 },
00:12:02.182 {
00:12:02.182 "name": "pt3",
00:12:02.182 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:02.182 "is_configured": true,
00:12:02.182 "data_offset": 2048,
00:12:02.182 "data_size": 63488
00:12:02.182 },
00:12:02.182 {
00:12:02.182 "name": "pt4",
00:12:02.182 "uuid":
"00000000-0000-0000-0000-000000000004", 00:12:02.182 "is_configured": true, 00:12:02.182 "data_offset": 2048, 00:12:02.182 "data_size": 63488 00:12:02.182 } 00:12:02.182 ] 00:12:02.182 } 00:12:02.182 } 00:12:02.182 }' 00:12:02.182 17:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:02.182 17:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:02.182 pt2 00:12:02.182 pt3 00:12:02.182 pt4' 00:12:02.182 17:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:02.182 17:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:02.182 17:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:02.182 17:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:02.182 17:28:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.182 17:28:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.182 17:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:02.182 17:28:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.441 17:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:02.441 17:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:02.441 17:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:02.441 17:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:02.441 17:28:35 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.441 17:28:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.441 17:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:02.441 17:28:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.441 17:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:02.441 17:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:02.441 17:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:02.441 17:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:02.441 17:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:02.441 17:28:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.441 17:28:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.441 17:28:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.441 17:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:02.441 17:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:02.441 17:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:02.441 17:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:02.441 17:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:02.441 17:28:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:12:02.441 17:28:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.441 17:28:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.441 17:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:02.441 17:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:02.441 17:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:02.441 17:28:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.441 17:28:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.441 17:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:02.441 [2024-12-07 17:28:35.739178] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:02.441 17:28:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.441 17:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 4a46e376-dded-4517-9a04-9a0a781f2ec6 '!=' 4a46e376-dded-4517-9a04-9a0a781f2ec6 ']' 00:12:02.441 17:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:12:02.441 17:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:02.441 17:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:02.441 17:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:12:02.441 17:28:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.441 17:28:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.441 [2024-12-07 17:28:35.786822] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:12:02.441 17:28:35 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.441 17:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:02.441 17:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:02.441 17:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:02.441 17:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:02.441 17:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:02.441 17:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:02.441 17:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.441 17:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.441 17:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.441 17:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.441 17:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.442 17:28:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.442 17:28:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.442 17:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:02.442 17:28:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.701 17:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.701 "name": "raid_bdev1", 00:12:02.701 "uuid": "4a46e376-dded-4517-9a04-9a0a781f2ec6", 00:12:02.701 "strip_size_kb": 0, 00:12:02.701 "state": "online", 
00:12:02.701 "raid_level": "raid1", 00:12:02.701 "superblock": true, 00:12:02.701 "num_base_bdevs": 4, 00:12:02.701 "num_base_bdevs_discovered": 3, 00:12:02.701 "num_base_bdevs_operational": 3, 00:12:02.701 "base_bdevs_list": [ 00:12:02.701 { 00:12:02.701 "name": null, 00:12:02.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.701 "is_configured": false, 00:12:02.701 "data_offset": 0, 00:12:02.701 "data_size": 63488 00:12:02.701 }, 00:12:02.701 { 00:12:02.701 "name": "pt2", 00:12:02.701 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:02.701 "is_configured": true, 00:12:02.701 "data_offset": 2048, 00:12:02.701 "data_size": 63488 00:12:02.701 }, 00:12:02.701 { 00:12:02.701 "name": "pt3", 00:12:02.701 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:02.701 "is_configured": true, 00:12:02.701 "data_offset": 2048, 00:12:02.701 "data_size": 63488 00:12:02.701 }, 00:12:02.701 { 00:12:02.701 "name": "pt4", 00:12:02.701 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:02.701 "is_configured": true, 00:12:02.701 "data_offset": 2048, 00:12:02.701 "data_size": 63488 00:12:02.701 } 00:12:02.701 ] 00:12:02.701 }' 00:12:02.701 17:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.701 17:28:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.960 17:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:02.960 17:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.960 17:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.960 [2024-12-07 17:28:36.246012] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:02.960 [2024-12-07 17:28:36.246128] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:02.960 [2024-12-07 17:28:36.246262] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:12:02.960 [2024-12-07 17:28:36.246385] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:02.960 [2024-12-07 17:28:36.246431] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:02.960 17:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.960 17:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.960 17:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.960 17:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.960 17:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:12:02.960 17:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.960 17:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:12:02.960 17:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:12:02.960 17:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:12:02.960 17:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:02.960 17:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:12:02.960 17:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.960 17:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.960 17:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.960 17:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:02.960 17:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:02.960 
17:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:12:02.960 17:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.960 17:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.960 17:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.960 17:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:02.960 17:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:02.960 17:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:12:02.960 17:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.960 17:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.219 17:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.219 17:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:03.219 17:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:03.219 17:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:12:03.219 17:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:03.219 17:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:03.219 17:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.219 17:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.219 [2024-12-07 17:28:36.345791] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:03.219 [2024-12-07 17:28:36.345858] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:03.219 [2024-12-07 17:28:36.345879] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:12:03.219 [2024-12-07 17:28:36.345889] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:03.219 [2024-12-07 17:28:36.348636] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:03.219 [2024-12-07 17:28:36.348676] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:03.219 [2024-12-07 17:28:36.348765] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:03.219 [2024-12-07 17:28:36.348827] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:03.219 pt2 00:12:03.219 17:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.219 17:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:03.219 17:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:03.219 17:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:03.219 17:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:03.219 17:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:03.219 17:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:03.219 17:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.219 17:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.219 17:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.219 17:28:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.219 17:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.219 17:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:03.219 17:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.219 17:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.219 17:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.219 17:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.219 "name": "raid_bdev1", 00:12:03.219 "uuid": "4a46e376-dded-4517-9a04-9a0a781f2ec6", 00:12:03.219 "strip_size_kb": 0, 00:12:03.219 "state": "configuring", 00:12:03.219 "raid_level": "raid1", 00:12:03.219 "superblock": true, 00:12:03.219 "num_base_bdevs": 4, 00:12:03.219 "num_base_bdevs_discovered": 1, 00:12:03.219 "num_base_bdevs_operational": 3, 00:12:03.219 "base_bdevs_list": [ 00:12:03.219 { 00:12:03.219 "name": null, 00:12:03.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.219 "is_configured": false, 00:12:03.219 "data_offset": 2048, 00:12:03.219 "data_size": 63488 00:12:03.219 }, 00:12:03.219 { 00:12:03.219 "name": "pt2", 00:12:03.219 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:03.219 "is_configured": true, 00:12:03.219 "data_offset": 2048, 00:12:03.219 "data_size": 63488 00:12:03.219 }, 00:12:03.219 { 00:12:03.219 "name": null, 00:12:03.219 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:03.219 "is_configured": false, 00:12:03.219 "data_offset": 2048, 00:12:03.219 "data_size": 63488 00:12:03.219 }, 00:12:03.219 { 00:12:03.219 "name": null, 00:12:03.219 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:03.219 "is_configured": false, 00:12:03.219 "data_offset": 2048, 00:12:03.219 "data_size": 63488 00:12:03.219 } 00:12:03.219 ] 00:12:03.219 }' 
00:12:03.219 17:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.219 17:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.478 17:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:03.478 17:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:03.478 17:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:03.478 17:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.478 17:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.478 [2024-12-07 17:28:36.733184] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:03.478 [2024-12-07 17:28:36.733272] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:03.478 [2024-12-07 17:28:36.733298] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:12:03.478 [2024-12-07 17:28:36.733309] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:03.478 [2024-12-07 17:28:36.733898] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:03.478 [2024-12-07 17:28:36.733921] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:03.478 [2024-12-07 17:28:36.734046] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:03.478 [2024-12-07 17:28:36.734077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:03.478 pt3 00:12:03.478 17:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.478 17:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:12:03.478 17:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:03.478 17:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:03.478 17:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:03.478 17:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:03.478 17:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:03.478 17:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.478 17:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.478 17:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.478 17:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.478 17:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.478 17:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:03.478 17:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.478 17:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.478 17:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.478 17:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.478 "name": "raid_bdev1", 00:12:03.478 "uuid": "4a46e376-dded-4517-9a04-9a0a781f2ec6", 00:12:03.478 "strip_size_kb": 0, 00:12:03.478 "state": "configuring", 00:12:03.478 "raid_level": "raid1", 00:12:03.478 "superblock": true, 00:12:03.478 "num_base_bdevs": 4, 00:12:03.478 "num_base_bdevs_discovered": 2, 00:12:03.478 "num_base_bdevs_operational": 3, 00:12:03.478 
"base_bdevs_list": [ 00:12:03.478 { 00:12:03.478 "name": null, 00:12:03.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.478 "is_configured": false, 00:12:03.478 "data_offset": 2048, 00:12:03.478 "data_size": 63488 00:12:03.478 }, 00:12:03.478 { 00:12:03.478 "name": "pt2", 00:12:03.478 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:03.478 "is_configured": true, 00:12:03.478 "data_offset": 2048, 00:12:03.478 "data_size": 63488 00:12:03.478 }, 00:12:03.478 { 00:12:03.478 "name": "pt3", 00:12:03.478 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:03.478 "is_configured": true, 00:12:03.478 "data_offset": 2048, 00:12:03.478 "data_size": 63488 00:12:03.478 }, 00:12:03.478 { 00:12:03.478 "name": null, 00:12:03.478 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:03.478 "is_configured": false, 00:12:03.478 "data_offset": 2048, 00:12:03.478 "data_size": 63488 00:12:03.478 } 00:12:03.478 ] 00:12:03.478 }' 00:12:03.479 17:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.479 17:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.046 17:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:04.046 17:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:04.046 17:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:12:04.046 17:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:04.046 17:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.047 17:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.047 [2024-12-07 17:28:37.184508] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:04.047 [2024-12-07 17:28:37.184701] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:04.047 [2024-12-07 17:28:37.184756] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:12:04.047 [2024-12-07 17:28:37.184792] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:04.047 [2024-12-07 17:28:37.185414] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:04.047 [2024-12-07 17:28:37.185478] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:04.047 [2024-12-07 17:28:37.185608] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:04.047 [2024-12-07 17:28:37.185661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:04.047 [2024-12-07 17:28:37.185847] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:04.047 [2024-12-07 17:28:37.185884] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:04.047 [2024-12-07 17:28:37.186202] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:04.047 [2024-12-07 17:28:37.186407] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:04.047 [2024-12-07 17:28:37.186451] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:04.047 [2024-12-07 17:28:37.186641] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:04.047 pt4 00:12:04.047 17:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.047 17:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:04.047 17:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:04.047 17:28:37 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:04.047 17:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:04.047 17:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:04.047 17:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:04.047 17:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.047 17:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.047 17:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.047 17:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.047 17:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:04.047 17:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.047 17:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.047 17:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.047 17:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.047 17:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.047 "name": "raid_bdev1", 00:12:04.047 "uuid": "4a46e376-dded-4517-9a04-9a0a781f2ec6", 00:12:04.047 "strip_size_kb": 0, 00:12:04.047 "state": "online", 00:12:04.047 "raid_level": "raid1", 00:12:04.047 "superblock": true, 00:12:04.047 "num_base_bdevs": 4, 00:12:04.047 "num_base_bdevs_discovered": 3, 00:12:04.047 "num_base_bdevs_operational": 3, 00:12:04.047 "base_bdevs_list": [ 00:12:04.047 { 00:12:04.047 "name": null, 00:12:04.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.047 "is_configured": false, 00:12:04.047 
"data_offset": 2048, 00:12:04.047 "data_size": 63488 00:12:04.047 }, 00:12:04.047 { 00:12:04.047 "name": "pt2", 00:12:04.047 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:04.047 "is_configured": true, 00:12:04.047 "data_offset": 2048, 00:12:04.047 "data_size": 63488 00:12:04.047 }, 00:12:04.047 { 00:12:04.047 "name": "pt3", 00:12:04.047 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:04.047 "is_configured": true, 00:12:04.047 "data_offset": 2048, 00:12:04.047 "data_size": 63488 00:12:04.047 }, 00:12:04.047 { 00:12:04.047 "name": "pt4", 00:12:04.047 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:04.047 "is_configured": true, 00:12:04.047 "data_offset": 2048, 00:12:04.047 "data_size": 63488 00:12:04.047 } 00:12:04.047 ] 00:12:04.047 }' 00:12:04.047 17:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.047 17:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.306 17:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:04.306 17:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.306 17:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.306 [2024-12-07 17:28:37.667625] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:04.306 [2024-12-07 17:28:37.667662] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:04.306 [2024-12-07 17:28:37.667767] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:04.306 [2024-12-07 17:28:37.667858] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:04.306 [2024-12-07 17:28:37.667872] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:04.306 17:28:37 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.306 17:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.306 17:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:12:04.306 17:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.306 17:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.566 17:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.566 17:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:12:04.566 17:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:12:04.566 17:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:12:04.566 17:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:12:04.566 17:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:12:04.566 17:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.566 17:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.566 17:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.566 17:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:04.566 17:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.566 17:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.566 [2024-12-07 17:28:37.743462] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:04.566 [2024-12-07 17:28:37.743607] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:12:04.566 [2024-12-07 17:28:37.743632] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:12:04.566 [2024-12-07 17:28:37.743647] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:04.566 [2024-12-07 17:28:37.746307] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:04.566 [2024-12-07 17:28:37.746350] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:04.566 [2024-12-07 17:28:37.746444] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:04.566 [2024-12-07 17:28:37.746509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:04.566 [2024-12-07 17:28:37.746680] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:12:04.566 [2024-12-07 17:28:37.746696] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:04.566 [2024-12-07 17:28:37.746714] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:12:04.566 [2024-12-07 17:28:37.746797] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:04.566 [2024-12-07 17:28:37.746910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:04.566 pt1 00:12:04.566 17:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.566 17:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:12:04.566 17:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:04.566 17:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:04.566 17:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:12:04.566 17:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:04.566 17:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:04.566 17:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:04.566 17:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.566 17:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.566 17:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.566 17:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.566 17:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.566 17:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.566 17:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.566 17:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:04.566 17:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.566 17:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.566 "name": "raid_bdev1", 00:12:04.566 "uuid": "4a46e376-dded-4517-9a04-9a0a781f2ec6", 00:12:04.566 "strip_size_kb": 0, 00:12:04.566 "state": "configuring", 00:12:04.566 "raid_level": "raid1", 00:12:04.566 "superblock": true, 00:12:04.566 "num_base_bdevs": 4, 00:12:04.566 "num_base_bdevs_discovered": 2, 00:12:04.566 "num_base_bdevs_operational": 3, 00:12:04.566 "base_bdevs_list": [ 00:12:04.566 { 00:12:04.566 "name": null, 00:12:04.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.566 "is_configured": false, 00:12:04.566 "data_offset": 2048, 00:12:04.566 
"data_size": 63488 00:12:04.566 }, 00:12:04.566 { 00:12:04.566 "name": "pt2", 00:12:04.566 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:04.566 "is_configured": true, 00:12:04.566 "data_offset": 2048, 00:12:04.566 "data_size": 63488 00:12:04.566 }, 00:12:04.566 { 00:12:04.566 "name": "pt3", 00:12:04.566 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:04.566 "is_configured": true, 00:12:04.566 "data_offset": 2048, 00:12:04.566 "data_size": 63488 00:12:04.566 }, 00:12:04.566 { 00:12:04.566 "name": null, 00:12:04.566 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:04.566 "is_configured": false, 00:12:04.566 "data_offset": 2048, 00:12:04.566 "data_size": 63488 00:12:04.566 } 00:12:04.566 ] 00:12:04.566 }' 00:12:04.566 17:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.566 17:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.826 17:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:12:04.826 17:28:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.826 17:28:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.826 17:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:04.826 17:28:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.086 17:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:12:05.086 17:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:05.086 17:28:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.086 17:28:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.086 [2024-12-07 
17:28:38.242671] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:05.086 [2024-12-07 17:28:38.242811] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.086 [2024-12-07 17:28:38.242855] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:12:05.086 [2024-12-07 17:28:38.242889] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.086 [2024-12-07 17:28:38.243512] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.086 [2024-12-07 17:28:38.243576] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:05.086 [2024-12-07 17:28:38.243710] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:05.086 [2024-12-07 17:28:38.243765] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:05.086 [2024-12-07 17:28:38.243966] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:12:05.086 [2024-12-07 17:28:38.244005] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:05.086 [2024-12-07 17:28:38.244321] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:05.086 [2024-12-07 17:28:38.244524] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:12:05.086 [2024-12-07 17:28:38.244565] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:12:05.086 [2024-12-07 17:28:38.244776] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:05.086 pt4 00:12:05.086 17:28:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.086 17:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:05.086 17:28:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:05.086 17:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:05.086 17:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:05.086 17:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:05.086 17:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:05.086 17:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.086 17:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.086 17:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.086 17:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.086 17:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.086 17:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.086 17:28:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.086 17:28:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.086 17:28:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.086 17:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.086 "name": "raid_bdev1", 00:12:05.086 "uuid": "4a46e376-dded-4517-9a04-9a0a781f2ec6", 00:12:05.086 "strip_size_kb": 0, 00:12:05.086 "state": "online", 00:12:05.086 "raid_level": "raid1", 00:12:05.086 "superblock": true, 00:12:05.086 "num_base_bdevs": 4, 00:12:05.086 "num_base_bdevs_discovered": 3, 00:12:05.086 "num_base_bdevs_operational": 3, 00:12:05.086 "base_bdevs_list": [ 00:12:05.086 { 
00:12:05.086 "name": null, 00:12:05.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.086 "is_configured": false, 00:12:05.086 "data_offset": 2048, 00:12:05.086 "data_size": 63488 00:12:05.086 }, 00:12:05.086 { 00:12:05.086 "name": "pt2", 00:12:05.086 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:05.086 "is_configured": true, 00:12:05.086 "data_offset": 2048, 00:12:05.086 "data_size": 63488 00:12:05.086 }, 00:12:05.086 { 00:12:05.086 "name": "pt3", 00:12:05.086 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:05.086 "is_configured": true, 00:12:05.086 "data_offset": 2048, 00:12:05.086 "data_size": 63488 00:12:05.086 }, 00:12:05.086 { 00:12:05.086 "name": "pt4", 00:12:05.086 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:05.086 "is_configured": true, 00:12:05.086 "data_offset": 2048, 00:12:05.086 "data_size": 63488 00:12:05.086 } 00:12:05.086 ] 00:12:05.086 }' 00:12:05.086 17:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.086 17:28:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.351 17:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:12:05.351 17:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:05.351 17:28:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.351 17:28:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.351 17:28:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.351 17:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:12:05.610 17:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:05.610 17:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:12:05.610 
17:28:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.610 17:28:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.610 [2024-12-07 17:28:38.742170] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:05.610 17:28:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.610 17:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 4a46e376-dded-4517-9a04-9a0a781f2ec6 '!=' 4a46e376-dded-4517-9a04-9a0a781f2ec6 ']' 00:12:05.610 17:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74545 00:12:05.610 17:28:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74545 ']' 00:12:05.610 17:28:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74545 00:12:05.610 17:28:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:12:05.610 17:28:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:05.610 17:28:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74545 00:12:05.610 killing process with pid 74545 00:12:05.610 17:28:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:05.610 17:28:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:05.610 17:28:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74545' 00:12:05.610 17:28:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74545 00:12:05.610 [2024-12-07 17:28:38.820917] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:05.610 [2024-12-07 17:28:38.821051] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:05.610 17:28:38 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74545 00:12:05.610 [2024-12-07 17:28:38.821145] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:05.610 [2024-12-07 17:28:38.821161] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:12:06.179 [2024-12-07 17:28:39.268921] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:07.561 17:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:07.561 00:12:07.561 real 0m8.950s 00:12:07.561 user 0m13.735s 00:12:07.561 sys 0m1.758s 00:12:07.561 17:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:07.561 17:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.561 ************************************ 00:12:07.561 END TEST raid_superblock_test 00:12:07.561 ************************************ 00:12:07.561 17:28:40 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:12:07.561 17:28:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:07.561 17:28:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:07.561 17:28:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:07.561 ************************************ 00:12:07.561 START TEST raid_read_error_test 00:12:07.561 ************************************ 00:12:07.561 17:28:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:12:07.561 17:28:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:07.561 17:28:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:07.561 17:28:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:07.561 
17:28:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:07.561 17:28:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:07.561 17:28:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:07.561 17:28:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:07.561 17:28:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:07.561 17:28:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:07.561 17:28:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:07.561 17:28:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:07.561 17:28:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:07.561 17:28:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:07.561 17:28:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:07.561 17:28:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:07.561 17:28:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:07.561 17:28:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:07.561 17:28:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:07.561 17:28:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:07.561 17:28:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:07.561 17:28:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:07.561 17:28:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:07.561 17:28:40 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:07.561 17:28:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:07.561 17:28:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:07.561 17:28:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:07.561 17:28:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:07.561 17:28:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.k55tBEVoGQ 00:12:07.561 17:28:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75036 00:12:07.561 17:28:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75036 00:12:07.561 17:28:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:07.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:07.561 17:28:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 75036 ']' 00:12:07.561 17:28:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:07.561 17:28:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:07.561 17:28:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:07.561 17:28:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:07.561 17:28:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.561 [2024-12-07 17:28:40.781575] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:12:07.561 [2024-12-07 17:28:40.781716] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75036 ] 00:12:07.820 [2024-12-07 17:28:40.965901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.820 [2024-12-07 17:28:41.110570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.078 [2024-12-07 17:28:41.359093] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:08.078 [2024-12-07 17:28:41.359149] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:08.336 17:28:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:08.336 17:28:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:08.336 17:28:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:08.336 17:28:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:08.336 17:28:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.336 17:28:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.336 BaseBdev1_malloc 00:12:08.336 17:28:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.336 17:28:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:08.336 17:28:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.337 17:28:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.337 true 00:12:08.337 17:28:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:08.337 17:28:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:08.337 17:28:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.337 17:28:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.337 [2024-12-07 17:28:41.672385] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:08.337 [2024-12-07 17:28:41.672458] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:08.337 [2024-12-07 17:28:41.672482] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:08.337 [2024-12-07 17:28:41.672495] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:08.337 [2024-12-07 17:28:41.675030] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:08.337 [2024-12-07 17:28:41.675066] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:08.337 BaseBdev1 00:12:08.337 17:28:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.337 17:28:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:08.337 17:28:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:08.337 17:28:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.337 17:28:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.596 BaseBdev2_malloc 00:12:08.596 17:28:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.596 17:28:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:08.596 17:28:41 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.596 17:28:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.596 true 00:12:08.596 17:28:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.596 17:28:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:08.596 17:28:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.596 17:28:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.596 [2024-12-07 17:28:41.748519] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:08.596 [2024-12-07 17:28:41.748670] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:08.596 [2024-12-07 17:28:41.748691] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:08.596 [2024-12-07 17:28:41.748704] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:08.596 [2024-12-07 17:28:41.751161] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:08.596 [2024-12-07 17:28:41.751248] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:08.597 BaseBdev2 00:12:08.597 17:28:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.597 17:28:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:08.597 17:28:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:08.597 17:28:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.597 17:28:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.597 BaseBdev3_malloc 00:12:08.597 17:28:41 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.597 17:28:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:08.597 17:28:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.597 17:28:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.597 true 00:12:08.597 17:28:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.597 17:28:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:08.597 17:28:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.597 17:28:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.597 [2024-12-07 17:28:41.837620] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:08.597 [2024-12-07 17:28:41.837756] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:08.597 [2024-12-07 17:28:41.837790] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:08.597 [2024-12-07 17:28:41.837823] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:08.597 [2024-12-07 17:28:41.840300] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:08.597 [2024-12-07 17:28:41.840396] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:08.597 BaseBdev3 00:12:08.597 17:28:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.597 17:28:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:08.597 17:28:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:12:08.597 17:28:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.597 17:28:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.597 BaseBdev4_malloc 00:12:08.597 17:28:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.597 17:28:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:08.597 17:28:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.597 17:28:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.597 true 00:12:08.597 17:28:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.597 17:28:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:08.597 17:28:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.597 17:28:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.597 [2024-12-07 17:28:41.911497] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:08.597 [2024-12-07 17:28:41.911629] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:08.597 [2024-12-07 17:28:41.911662] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:08.597 [2024-12-07 17:28:41.911695] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:08.597 [2024-12-07 17:28:41.914110] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:08.597 [2024-12-07 17:28:41.914183] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:08.597 BaseBdev4 00:12:08.597 17:28:41 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.597 17:28:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:08.597 17:28:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.597 17:28:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.597 [2024-12-07 17:28:41.923546] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:08.597 [2024-12-07 17:28:41.925665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:08.597 [2024-12-07 17:28:41.925783] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:08.597 [2024-12-07 17:28:41.925863] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:08.597 [2024-12-07 17:28:41.926153] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:08.597 [2024-12-07 17:28:41.926203] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:08.597 [2024-12-07 17:28:41.926456] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:08.597 [2024-12-07 17:28:41.926637] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:08.597 [2024-12-07 17:28:41.926647] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:08.597 [2024-12-07 17:28:41.926816] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:08.597 17:28:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.597 17:28:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:08.597 17:28:41 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:08.597 17:28:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:08.597 17:28:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:08.597 17:28:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:08.597 17:28:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:08.597 17:28:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.597 17:28:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.597 17:28:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.597 17:28:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.597 17:28:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.597 17:28:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.597 17:28:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.597 17:28:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.597 17:28:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.856 17:28:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.857 "name": "raid_bdev1", 00:12:08.857 "uuid": "ef8a4de5-8b4e-41ed-8dd7-8808192cf763", 00:12:08.857 "strip_size_kb": 0, 00:12:08.857 "state": "online", 00:12:08.857 "raid_level": "raid1", 00:12:08.857 "superblock": true, 00:12:08.857 "num_base_bdevs": 4, 00:12:08.857 "num_base_bdevs_discovered": 4, 00:12:08.857 "num_base_bdevs_operational": 4, 00:12:08.857 "base_bdevs_list": [ 00:12:08.857 { 
00:12:08.857 "name": "BaseBdev1", 00:12:08.857 "uuid": "54c1702b-4582-54b2-9f57-f7f5489d909d", 00:12:08.857 "is_configured": true, 00:12:08.857 "data_offset": 2048, 00:12:08.857 "data_size": 63488 00:12:08.857 }, 00:12:08.857 { 00:12:08.857 "name": "BaseBdev2", 00:12:08.857 "uuid": "50d7f0d0-0116-5a83-b5ff-5acdbcfec9a2", 00:12:08.857 "is_configured": true, 00:12:08.857 "data_offset": 2048, 00:12:08.857 "data_size": 63488 00:12:08.857 }, 00:12:08.857 { 00:12:08.857 "name": "BaseBdev3", 00:12:08.857 "uuid": "825b2077-915e-5469-8d93-7d9228d3a2d9", 00:12:08.857 "is_configured": true, 00:12:08.857 "data_offset": 2048, 00:12:08.857 "data_size": 63488 00:12:08.857 }, 00:12:08.857 { 00:12:08.857 "name": "BaseBdev4", 00:12:08.857 "uuid": "893e22e9-4c1e-5dc1-9102-b688ad2328ee", 00:12:08.857 "is_configured": true, 00:12:08.857 "data_offset": 2048, 00:12:08.857 "data_size": 63488 00:12:08.857 } 00:12:08.857 ] 00:12:08.857 }' 00:12:08.857 17:28:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.857 17:28:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.116 17:28:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:09.116 17:28:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:09.116 [2024-12-07 17:28:42.436156] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:10.052 17:28:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:10.052 17:28:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.052 17:28:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.052 17:28:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.052 17:28:43 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:10.052 17:28:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:10.052 17:28:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:12:10.052 17:28:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:10.052 17:28:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:10.052 17:28:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:10.052 17:28:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:10.052 17:28:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:10.052 17:28:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:10.052 17:28:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:10.052 17:28:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.052 17:28:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.052 17:28:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.052 17:28:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.052 17:28:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.052 17:28:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.052 17:28:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.052 17:28:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.052 17:28:43 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.052 17:28:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.052 "name": "raid_bdev1", 00:12:10.052 "uuid": "ef8a4de5-8b4e-41ed-8dd7-8808192cf763", 00:12:10.052 "strip_size_kb": 0, 00:12:10.052 "state": "online", 00:12:10.052 "raid_level": "raid1", 00:12:10.052 "superblock": true, 00:12:10.052 "num_base_bdevs": 4, 00:12:10.052 "num_base_bdevs_discovered": 4, 00:12:10.052 "num_base_bdevs_operational": 4, 00:12:10.052 "base_bdevs_list": [ 00:12:10.052 { 00:12:10.052 "name": "BaseBdev1", 00:12:10.052 "uuid": "54c1702b-4582-54b2-9f57-f7f5489d909d", 00:12:10.052 "is_configured": true, 00:12:10.052 "data_offset": 2048, 00:12:10.052 "data_size": 63488 00:12:10.052 }, 00:12:10.052 { 00:12:10.052 "name": "BaseBdev2", 00:12:10.052 "uuid": "50d7f0d0-0116-5a83-b5ff-5acdbcfec9a2", 00:12:10.052 "is_configured": true, 00:12:10.052 "data_offset": 2048, 00:12:10.052 "data_size": 63488 00:12:10.052 }, 00:12:10.052 { 00:12:10.052 "name": "BaseBdev3", 00:12:10.052 "uuid": "825b2077-915e-5469-8d93-7d9228d3a2d9", 00:12:10.052 "is_configured": true, 00:12:10.052 "data_offset": 2048, 00:12:10.052 "data_size": 63488 00:12:10.052 }, 00:12:10.052 { 00:12:10.052 "name": "BaseBdev4", 00:12:10.052 "uuid": "893e22e9-4c1e-5dc1-9102-b688ad2328ee", 00:12:10.052 "is_configured": true, 00:12:10.052 "data_offset": 2048, 00:12:10.052 "data_size": 63488 00:12:10.052 } 00:12:10.052 ] 00:12:10.052 }' 00:12:10.052 17:28:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.052 17:28:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.620 17:28:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:10.620 17:28:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.620 17:28:43 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:10.620 [2024-12-07 17:28:43.776023] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:10.620 [2024-12-07 17:28:43.776154] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:10.620 [2024-12-07 17:28:43.779023] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:10.620 [2024-12-07 17:28:43.779135] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:10.620 [2024-12-07 17:28:43.779313] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:10.620 [2024-12-07 17:28:43.779367] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:10.620 { 00:12:10.620 "results": [ 00:12:10.620 { 00:12:10.620 "job": "raid_bdev1", 00:12:10.620 "core_mask": "0x1", 00:12:10.620 "workload": "randrw", 00:12:10.620 "percentage": 50, 00:12:10.620 "status": "finished", 00:12:10.620 "queue_depth": 1, 00:12:10.620 "io_size": 131072, 00:12:10.620 "runtime": 1.340587, 00:12:10.620 "iops": 8033.048209478386, 00:12:10.620 "mibps": 1004.1310261847982, 00:12:10.620 "io_failed": 0, 00:12:10.620 "io_timeout": 0, 00:12:10.620 "avg_latency_us": 121.924250791026, 00:12:10.620 "min_latency_us": 23.02882096069869, 00:12:10.620 "max_latency_us": 1888.810480349345 00:12:10.620 } 00:12:10.620 ], 00:12:10.620 "core_count": 1 00:12:10.620 } 00:12:10.620 17:28:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.620 17:28:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75036 00:12:10.620 17:28:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 75036 ']' 00:12:10.620 17:28:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 75036 00:12:10.620 17:28:43 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:12:10.620 17:28:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:10.620 17:28:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75036 00:12:10.620 killing process with pid 75036 00:12:10.620 17:28:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:10.620 17:28:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:10.620 17:28:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75036' 00:12:10.620 17:28:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 75036 00:12:10.620 [2024-12-07 17:28:43.824985] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:10.620 17:28:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 75036 00:12:10.880 [2024-12-07 17:28:44.181104] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:12.261 17:28:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:12.261 17:28:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.k55tBEVoGQ 00:12:12.261 17:28:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:12.261 17:28:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:12.261 17:28:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:12.261 17:28:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:12.261 17:28:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:12.261 17:28:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:12.261 00:12:12.261 real 0m4.816s 00:12:12.261 user 0m5.429s 00:12:12.261 sys 0m0.728s 
00:12:12.261 ************************************ 00:12:12.261 END TEST raid_read_error_test 00:12:12.261 ************************************ 00:12:12.261 17:28:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:12.261 17:28:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.261 17:28:45 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:12:12.261 17:28:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:12.261 17:28:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:12.261 17:28:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:12.261 ************************************ 00:12:12.261 START TEST raid_write_error_test 00:12:12.261 ************************************ 00:12:12.261 17:28:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:12:12.261 17:28:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:12.261 17:28:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:12.261 17:28:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:12.261 17:28:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:12.261 17:28:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:12.261 17:28:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:12.261 17:28:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:12.261 17:28:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:12.261 17:28:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:12.261 17:28:45 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:12.261 17:28:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:12.261 17:28:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:12.261 17:28:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:12.261 17:28:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:12.261 17:28:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:12.261 17:28:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:12.261 17:28:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:12.261 17:28:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:12.261 17:28:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:12.261 17:28:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:12.261 17:28:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:12.261 17:28:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:12.261 17:28:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:12.261 17:28:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:12.261 17:28:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:12.261 17:28:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:12.261 17:28:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:12.261 17:28:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.1bIdswjqRv 00:12:12.261 17:28:45 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75183 00:12:12.261 17:28:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:12.261 17:28:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75183 00:12:12.261 17:28:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 75183 ']' 00:12:12.261 17:28:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.261 17:28:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:12.261 17:28:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:12.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:12.261 17:28:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:12.261 17:28:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.520 [2024-12-07 17:28:45.669970] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:12:12.520 [2024-12-07 17:28:45.670183] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75183 ] 00:12:12.520 [2024-12-07 17:28:45.833066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:12.779 [2024-12-07 17:28:45.967872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.039 [2024-12-07 17:28:46.203493] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:13.039 [2024-12-07 17:28:46.203607] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:13.298 17:28:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:13.298 17:28:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:13.298 17:28:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:13.298 17:28:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:13.298 17:28:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.298 17:28:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.298 BaseBdev1_malloc 00:12:13.298 17:28:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.298 17:28:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:13.298 17:28:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.298 17:28:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.298 true 00:12:13.298 17:28:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:13.298 17:28:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:13.298 17:28:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.298 17:28:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.298 [2024-12-07 17:28:46.563548] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:13.298 [2024-12-07 17:28:46.563691] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:13.298 [2024-12-07 17:28:46.563731] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:13.298 [2024-12-07 17:28:46.563765] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:13.298 [2024-12-07 17:28:46.566219] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:13.298 [2024-12-07 17:28:46.566299] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:13.298 BaseBdev1 00:12:13.298 17:28:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.298 17:28:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:13.298 17:28:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:13.298 17:28:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.298 17:28:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.298 BaseBdev2_malloc 00:12:13.298 17:28:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.298 17:28:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:13.298 17:28:46 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.298 17:28:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.298 true 00:12:13.298 17:28:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.298 17:28:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:13.298 17:28:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.298 17:28:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.298 [2024-12-07 17:28:46.635774] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:13.298 [2024-12-07 17:28:46.635892] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:13.298 [2024-12-07 17:28:46.635912] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:13.298 [2024-12-07 17:28:46.635923] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:13.298 [2024-12-07 17:28:46.638243] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:13.298 [2024-12-07 17:28:46.638320] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:13.298 BaseBdev2 00:12:13.298 17:28:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.298 17:28:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:13.298 17:28:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:13.298 17:28:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.298 17:28:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:13.558 BaseBdev3_malloc 00:12:13.558 17:28:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.558 17:28:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:13.558 17:28:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.558 17:28:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.558 true 00:12:13.558 17:28:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.558 17:28:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:13.558 17:28:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.558 17:28:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.558 [2024-12-07 17:28:46.721865] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:13.558 [2024-12-07 17:28:46.721995] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:13.558 [2024-12-07 17:28:46.722031] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:13.558 [2024-12-07 17:28:46.722065] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:13.558 [2024-12-07 17:28:46.724517] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:13.558 [2024-12-07 17:28:46.724592] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:13.558 BaseBdev3 00:12:13.558 17:28:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.558 17:28:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:13.558 17:28:46 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:13.558 17:28:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.558 17:28:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.558 BaseBdev4_malloc 00:12:13.558 17:28:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.558 17:28:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:13.558 17:28:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.558 17:28:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.558 true 00:12:13.558 17:28:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.558 17:28:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:13.558 17:28:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.558 17:28:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.558 [2024-12-07 17:28:46.794919] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:13.558 [2024-12-07 17:28:46.795051] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:13.558 [2024-12-07 17:28:46.795086] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:13.558 [2024-12-07 17:28:46.795119] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:13.558 [2024-12-07 17:28:46.797490] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:13.558 [2024-12-07 17:28:46.797582] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:13.558 BaseBdev4 
00:12:13.558 17:28:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.558 17:28:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:13.558 17:28:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.558 17:28:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.558 [2024-12-07 17:28:46.806981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:13.558 [2024-12-07 17:28:46.809118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:13.558 [2024-12-07 17:28:46.809233] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:13.558 [2024-12-07 17:28:46.809326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:13.558 [2024-12-07 17:28:46.809593] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:13.558 [2024-12-07 17:28:46.809642] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:13.558 [2024-12-07 17:28:46.809902] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:13.558 [2024-12-07 17:28:46.810124] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:13.558 [2024-12-07 17:28:46.810165] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:13.558 [2024-12-07 17:28:46.810340] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:13.558 17:28:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.558 17:28:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:12:13.558 17:28:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:13.558 17:28:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:13.558 17:28:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:13.558 17:28:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:13.558 17:28:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:13.558 17:28:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.558 17:28:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.558 17:28:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.558 17:28:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.558 17:28:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.558 17:28:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.558 17:28:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.558 17:28:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.558 17:28:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.558 17:28:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.558 "name": "raid_bdev1", 00:12:13.558 "uuid": "69167b5d-2853-48ea-8180-3823871cc0ad", 00:12:13.558 "strip_size_kb": 0, 00:12:13.558 "state": "online", 00:12:13.558 "raid_level": "raid1", 00:12:13.558 "superblock": true, 00:12:13.558 "num_base_bdevs": 4, 00:12:13.558 "num_base_bdevs_discovered": 4, 00:12:13.558 
"num_base_bdevs_operational": 4, 00:12:13.558 "base_bdevs_list": [ 00:12:13.558 { 00:12:13.558 "name": "BaseBdev1", 00:12:13.558 "uuid": "d375bae6-cbd5-56e6-9643-dbb1e5676395", 00:12:13.558 "is_configured": true, 00:12:13.558 "data_offset": 2048, 00:12:13.558 "data_size": 63488 00:12:13.558 }, 00:12:13.558 { 00:12:13.558 "name": "BaseBdev2", 00:12:13.558 "uuid": "3e5f9821-d0e7-59e9-99c9-73b71658a67b", 00:12:13.558 "is_configured": true, 00:12:13.558 "data_offset": 2048, 00:12:13.558 "data_size": 63488 00:12:13.558 }, 00:12:13.558 { 00:12:13.558 "name": "BaseBdev3", 00:12:13.558 "uuid": "63e3bb9a-271c-5ca7-8b93-80edf59d8c7f", 00:12:13.558 "is_configured": true, 00:12:13.558 "data_offset": 2048, 00:12:13.558 "data_size": 63488 00:12:13.558 }, 00:12:13.558 { 00:12:13.558 "name": "BaseBdev4", 00:12:13.558 "uuid": "74758ddc-cc83-513c-a114-2261d5518e85", 00:12:13.558 "is_configured": true, 00:12:13.558 "data_offset": 2048, 00:12:13.558 "data_size": 63488 00:12:13.558 } 00:12:13.558 ] 00:12:13.558 }' 00:12:13.558 17:28:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.558 17:28:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.124 17:28:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:14.124 17:28:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:14.124 [2024-12-07 17:28:47.319691] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:15.059 17:28:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:15.059 17:28:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.059 17:28:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.059 [2024-12-07 17:28:48.234829] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:12:15.059 [2024-12-07 17:28:48.235014] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:15.059 [2024-12-07 17:28:48.235336] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:12:15.059 17:28:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.059 17:28:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:15.059 17:28:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:15.059 17:28:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:12:15.059 17:28:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:12:15.059 17:28:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:15.059 17:28:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:15.059 17:28:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:15.059 17:28:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:15.059 17:28:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:15.059 17:28:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:15.059 17:28:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.059 17:28:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.059 17:28:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.059 17:28:48 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.059 17:28:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.059 17:28:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:15.059 17:28:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.060 17:28:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.060 17:28:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.060 17:28:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.060 "name": "raid_bdev1", 00:12:15.060 "uuid": "69167b5d-2853-48ea-8180-3823871cc0ad", 00:12:15.060 "strip_size_kb": 0, 00:12:15.060 "state": "online", 00:12:15.060 "raid_level": "raid1", 00:12:15.060 "superblock": true, 00:12:15.060 "num_base_bdevs": 4, 00:12:15.060 "num_base_bdevs_discovered": 3, 00:12:15.060 "num_base_bdevs_operational": 3, 00:12:15.060 "base_bdevs_list": [ 00:12:15.060 { 00:12:15.060 "name": null, 00:12:15.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.060 "is_configured": false, 00:12:15.060 "data_offset": 0, 00:12:15.060 "data_size": 63488 00:12:15.060 }, 00:12:15.060 { 00:12:15.060 "name": "BaseBdev2", 00:12:15.060 "uuid": "3e5f9821-d0e7-59e9-99c9-73b71658a67b", 00:12:15.060 "is_configured": true, 00:12:15.060 "data_offset": 2048, 00:12:15.060 "data_size": 63488 00:12:15.060 }, 00:12:15.060 { 00:12:15.060 "name": "BaseBdev3", 00:12:15.060 "uuid": "63e3bb9a-271c-5ca7-8b93-80edf59d8c7f", 00:12:15.060 "is_configured": true, 00:12:15.060 "data_offset": 2048, 00:12:15.060 "data_size": 63488 00:12:15.060 }, 00:12:15.060 { 00:12:15.060 "name": "BaseBdev4", 00:12:15.060 "uuid": "74758ddc-cc83-513c-a114-2261d5518e85", 00:12:15.060 "is_configured": true, 00:12:15.060 "data_offset": 2048, 00:12:15.060 "data_size": 63488 00:12:15.060 } 00:12:15.060 ] 
00:12:15.060 }' 00:12:15.060 17:28:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.060 17:28:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.318 17:28:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:15.318 17:28:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.318 17:28:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.318 [2024-12-07 17:28:48.688796] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:15.318 [2024-12-07 17:28:48.688910] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:15.318 [2024-12-07 17:28:48.691582] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:15.318 [2024-12-07 17:28:48.691671] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:15.318 [2024-12-07 17:28:48.691828] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:15.318 [2024-12-07 17:28:48.691877] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:15.318 { 00:12:15.318 "results": [ 00:12:15.318 { 00:12:15.318 "job": "raid_bdev1", 00:12:15.318 "core_mask": "0x1", 00:12:15.318 "workload": "randrw", 00:12:15.318 "percentage": 50, 00:12:15.318 "status": "finished", 00:12:15.318 "queue_depth": 1, 00:12:15.318 "io_size": 131072, 00:12:15.318 "runtime": 1.369812, 00:12:15.318 "iops": 8585.119709857996, 00:12:15.318 "mibps": 1073.1399637322495, 00:12:15.318 "io_failed": 0, 00:12:15.318 "io_timeout": 0, 00:12:15.318 "avg_latency_us": 113.86241273802096, 00:12:15.319 "min_latency_us": 23.252401746724892, 00:12:15.319 "max_latency_us": 1330.7528384279476 00:12:15.319 } 00:12:15.319 ], 00:12:15.319 "core_count": 1 
00:12:15.319 } 00:12:15.319 17:28:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.319 17:28:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75183 00:12:15.319 17:28:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 75183 ']' 00:12:15.319 17:28:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 75183 00:12:15.319 17:28:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:12:15.587 17:28:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:15.587 17:28:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75183 00:12:15.587 17:28:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:15.587 17:28:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:15.587 killing process with pid 75183 00:12:15.587 17:28:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75183' 00:12:15.587 17:28:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 75183 00:12:15.587 [2024-12-07 17:28:48.734300] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:15.587 17:28:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 75183 00:12:15.846 [2024-12-07 17:28:49.088193] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:17.231 17:28:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.1bIdswjqRv 00:12:17.231 17:28:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:17.231 17:28:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:17.231 17:28:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:12:17.231 17:28:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:17.231 17:28:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:17.231 17:28:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:17.231 17:28:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:17.231 00:12:17.231 real 0m4.833s 00:12:17.231 user 0m5.538s 00:12:17.231 sys 0m0.700s 00:12:17.231 17:28:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:17.231 ************************************ 00:12:17.231 END TEST raid_write_error_test 00:12:17.231 ************************************ 00:12:17.231 17:28:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.231 17:28:50 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:12:17.231 17:28:50 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:12:17.231 17:28:50 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:12:17.231 17:28:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:17.231 17:28:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:17.231 17:28:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:17.231 ************************************ 00:12:17.231 START TEST raid_rebuild_test 00:12:17.231 ************************************ 00:12:17.231 17:28:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:12:17.231 17:28:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:17.231 17:28:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:17.231 17:28:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:17.231 
17:28:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:17.231 17:28:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:17.231 17:28:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:17.231 17:28:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:17.231 17:28:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:17.231 17:28:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:17.231 17:28:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:17.231 17:28:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:17.231 17:28:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:17.231 17:28:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:17.231 17:28:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:17.231 17:28:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:17.231 17:28:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:17.231 17:28:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:17.231 17:28:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:17.231 17:28:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:17.231 17:28:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:17.231 17:28:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:17.231 17:28:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:17.231 17:28:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:12:17.231 17:28:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75327 00:12:17.231 17:28:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:17.231 17:28:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75327 00:12:17.231 17:28:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75327 ']' 00:12:17.231 17:28:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:17.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:17.231 17:28:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:17.231 17:28:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:17.231 17:28:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:17.231 17:28:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.231 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:17.231 Zero copy mechanism will not be used. 00:12:17.231 [2024-12-07 17:28:50.563317] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:12:17.231 [2024-12-07 17:28:50.563444] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75327 ] 00:12:17.490 [2024-12-07 17:28:50.740995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.751 [2024-12-07 17:28:50.875516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.751 [2024-12-07 17:28:51.113276] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:17.751 [2024-12-07 17:28:51.113336] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:18.320 17:28:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:18.320 17:28:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:12:18.320 17:28:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:18.320 17:28:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:18.320 17:28:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.320 17:28:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.320 BaseBdev1_malloc 00:12:18.320 17:28:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.320 17:28:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:18.320 17:28:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.320 17:28:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.320 [2024-12-07 17:28:51.452180] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:18.320 
[2024-12-07 17:28:51.452336] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:18.320 [2024-12-07 17:28:51.452386] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:18.320 [2024-12-07 17:28:51.452419] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:18.320 [2024-12-07 17:28:51.454779] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:18.320 [2024-12-07 17:28:51.454854] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:18.320 BaseBdev1 00:12:18.320 17:28:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.320 17:28:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:18.320 17:28:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:18.320 17:28:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.320 17:28:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.320 BaseBdev2_malloc 00:12:18.320 17:28:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.320 17:28:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:18.320 17:28:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.320 17:28:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.320 [2024-12-07 17:28:51.511480] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:18.320 [2024-12-07 17:28:51.511601] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:18.320 [2024-12-07 17:28:51.511651] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:12:18.320 [2024-12-07 17:28:51.511685] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:18.320 [2024-12-07 17:28:51.514040] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:18.320 [2024-12-07 17:28:51.514112] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:18.320 BaseBdev2 00:12:18.320 17:28:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.320 17:28:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:18.320 17:28:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.320 17:28:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.320 spare_malloc 00:12:18.320 17:28:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.320 17:28:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:18.320 17:28:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.320 17:28:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.320 spare_delay 00:12:18.320 17:28:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.320 17:28:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:18.320 17:28:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.320 17:28:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.320 [2024-12-07 17:28:51.597610] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:18.320 [2024-12-07 17:28:51.597740] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:12:18.320 [2024-12-07 17:28:51.597778] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:18.320 [2024-12-07 17:28:51.597813] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:18.320 [2024-12-07 17:28:51.600316] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:18.320 [2024-12-07 17:28:51.600392] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:18.320 spare 00:12:18.320 17:28:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.320 17:28:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:18.320 17:28:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.320 17:28:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.320 [2024-12-07 17:28:51.609630] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:18.320 [2024-12-07 17:28:51.611672] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:18.320 [2024-12-07 17:28:51.611801] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:18.320 [2024-12-07 17:28:51.611843] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:18.320 [2024-12-07 17:28:51.612105] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:18.320 [2024-12-07 17:28:51.612311] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:18.320 [2024-12-07 17:28:51.612326] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:18.320 [2024-12-07 17:28:51.612462] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:12:18.320 17:28:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.320 17:28:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:18.320 17:28:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:18.320 17:28:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:18.320 17:28:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:18.320 17:28:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:18.320 17:28:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:18.320 17:28:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.321 17:28:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.321 17:28:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.321 17:28:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.321 17:28:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.321 17:28:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.321 17:28:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.321 17:28:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.321 17:28:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.321 17:28:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.321 "name": "raid_bdev1", 00:12:18.321 "uuid": "50d8cffb-da5d-4826-a791-dc57611b92cc", 00:12:18.321 "strip_size_kb": 0, 00:12:18.321 "state": "online", 00:12:18.321 
"raid_level": "raid1", 00:12:18.321 "superblock": false, 00:12:18.321 "num_base_bdevs": 2, 00:12:18.321 "num_base_bdevs_discovered": 2, 00:12:18.321 "num_base_bdevs_operational": 2, 00:12:18.321 "base_bdevs_list": [ 00:12:18.321 { 00:12:18.321 "name": "BaseBdev1", 00:12:18.321 "uuid": "2014e816-e558-5932-a10b-681e55d26cde", 00:12:18.321 "is_configured": true, 00:12:18.321 "data_offset": 0, 00:12:18.321 "data_size": 65536 00:12:18.321 }, 00:12:18.321 { 00:12:18.321 "name": "BaseBdev2", 00:12:18.321 "uuid": "c0a104eb-1155-5250-8dbd-987af14952ed", 00:12:18.321 "is_configured": true, 00:12:18.321 "data_offset": 0, 00:12:18.321 "data_size": 65536 00:12:18.321 } 00:12:18.321 ] 00:12:18.321 }' 00:12:18.321 17:28:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.321 17:28:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.889 17:28:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:18.889 17:28:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.889 17:28:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.889 17:28:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:18.889 [2024-12-07 17:28:52.093173] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:18.889 17:28:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.889 17:28:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:18.889 17:28:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.889 17:28:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.889 17:28:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.889 17:28:52 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:18.889 17:28:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.889 17:28:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:18.889 17:28:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:18.889 17:28:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:18.889 17:28:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:18.889 17:28:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:18.889 17:28:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:18.889 17:28:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:18.889 17:28:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:18.889 17:28:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:18.889 17:28:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:18.889 17:28:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:18.889 17:28:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:18.889 17:28:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:18.889 17:28:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:19.149 [2024-12-07 17:28:52.376371] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:19.149 /dev/nbd0 00:12:19.149 17:28:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:19.149 17:28:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:12:19.149 17:28:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:19.149 17:28:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:19.149 17:28:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:19.149 17:28:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:19.149 17:28:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:19.149 17:28:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:19.149 17:28:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:19.149 17:28:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:19.150 17:28:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:19.150 1+0 records in 00:12:19.150 1+0 records out 00:12:19.150 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000546246 s, 7.5 MB/s 00:12:19.150 17:28:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.150 17:28:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:19.150 17:28:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.150 17:28:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:19.150 17:28:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:19.150 17:28:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:19.150 17:28:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:19.150 17:28:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 
-- # '[' raid1 = raid5f ']' 00:12:19.150 17:28:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:19.150 17:28:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:12:23.348 65536+0 records in 00:12:23.348 65536+0 records out 00:12:23.348 33554432 bytes (34 MB, 32 MiB) copied, 4.18684 s, 8.0 MB/s 00:12:23.348 17:28:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:23.348 17:28:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:23.348 17:28:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:23.348 17:28:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:23.349 17:28:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:23.349 17:28:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:23.349 17:28:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:23.608 17:28:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:23.608 [2024-12-07 17:28:56.850088] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:23.608 17:28:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:23.609 17:28:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:23.609 17:28:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:23.609 17:28:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:23.609 17:28:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:23.609 17:28:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # 
break 00:12:23.609 17:28:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:23.609 17:28:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:23.609 17:28:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.609 17:28:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.609 [2024-12-07 17:28:56.866158] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:23.609 17:28:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.609 17:28:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:23.609 17:28:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:23.609 17:28:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:23.609 17:28:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:23.609 17:28:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:23.609 17:28:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:23.609 17:28:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.609 17:28:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.609 17:28:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.609 17:28:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.609 17:28:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.609 17:28:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.609 17:28:56 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.609 17:28:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.609 17:28:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.609 17:28:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.609 "name": "raid_bdev1", 00:12:23.609 "uuid": "50d8cffb-da5d-4826-a791-dc57611b92cc", 00:12:23.609 "strip_size_kb": 0, 00:12:23.609 "state": "online", 00:12:23.609 "raid_level": "raid1", 00:12:23.609 "superblock": false, 00:12:23.609 "num_base_bdevs": 2, 00:12:23.609 "num_base_bdevs_discovered": 1, 00:12:23.609 "num_base_bdevs_operational": 1, 00:12:23.609 "base_bdevs_list": [ 00:12:23.609 { 00:12:23.609 "name": null, 00:12:23.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.609 "is_configured": false, 00:12:23.609 "data_offset": 0, 00:12:23.609 "data_size": 65536 00:12:23.609 }, 00:12:23.609 { 00:12:23.609 "name": "BaseBdev2", 00:12:23.609 "uuid": "c0a104eb-1155-5250-8dbd-987af14952ed", 00:12:23.609 "is_configured": true, 00:12:23.609 "data_offset": 0, 00:12:23.609 "data_size": 65536 00:12:23.609 } 00:12:23.609 ] 00:12:23.609 }' 00:12:23.609 17:28:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.609 17:28:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.178 17:28:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:24.178 17:28:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.178 17:28:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.178 [2024-12-07 17:28:57.329426] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:24.178 [2024-12-07 17:28:57.347971] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:12:24.178 17:28:57 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.178 17:28:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:24.178 [2024-12-07 17:28:57.350044] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:25.117 17:28:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:25.117 17:28:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:25.117 17:28:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:25.117 17:28:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:25.117 17:28:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:25.117 17:28:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.117 17:28:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.117 17:28:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.117 17:28:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.117 17:28:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.117 17:28:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:25.117 "name": "raid_bdev1", 00:12:25.117 "uuid": "50d8cffb-da5d-4826-a791-dc57611b92cc", 00:12:25.117 "strip_size_kb": 0, 00:12:25.117 "state": "online", 00:12:25.117 "raid_level": "raid1", 00:12:25.117 "superblock": false, 00:12:25.117 "num_base_bdevs": 2, 00:12:25.117 "num_base_bdevs_discovered": 2, 00:12:25.117 "num_base_bdevs_operational": 2, 00:12:25.117 "process": { 00:12:25.117 "type": "rebuild", 00:12:25.117 "target": "spare", 00:12:25.117 "progress": { 00:12:25.117 "blocks": 20480, 
00:12:25.117 "percent": 31 00:12:25.117 } 00:12:25.117 }, 00:12:25.117 "base_bdevs_list": [ 00:12:25.117 { 00:12:25.117 "name": "spare", 00:12:25.117 "uuid": "c4f19184-142c-5b0d-838c-4dd819b5435a", 00:12:25.117 "is_configured": true, 00:12:25.117 "data_offset": 0, 00:12:25.117 "data_size": 65536 00:12:25.117 }, 00:12:25.117 { 00:12:25.117 "name": "BaseBdev2", 00:12:25.117 "uuid": "c0a104eb-1155-5250-8dbd-987af14952ed", 00:12:25.117 "is_configured": true, 00:12:25.117 "data_offset": 0, 00:12:25.117 "data_size": 65536 00:12:25.117 } 00:12:25.117 ] 00:12:25.117 }' 00:12:25.117 17:28:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:25.117 17:28:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:25.117 17:28:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:25.377 17:28:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:25.377 17:28:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:25.377 17:28:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.377 17:28:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.377 [2024-12-07 17:28:58.510204] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:25.377 [2024-12-07 17:28:58.555275] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:25.377 [2024-12-07 17:28:58.555380] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:25.377 [2024-12-07 17:28:58.555416] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:25.377 [2024-12-07 17:28:58.555431] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:25.377 17:28:58 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.377 17:28:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:25.377 17:28:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:25.377 17:28:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:25.377 17:28:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:25.377 17:28:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:25.377 17:28:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:25.377 17:28:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.377 17:28:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.377 17:28:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.377 17:28:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.377 17:28:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.377 17:28:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.377 17:28:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.377 17:28:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.377 17:28:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.377 17:28:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.377 "name": "raid_bdev1", 00:12:25.377 "uuid": "50d8cffb-da5d-4826-a791-dc57611b92cc", 00:12:25.377 "strip_size_kb": 0, 00:12:25.377 "state": "online", 00:12:25.377 "raid_level": "raid1", 00:12:25.377 
"superblock": false, 00:12:25.377 "num_base_bdevs": 2, 00:12:25.377 "num_base_bdevs_discovered": 1, 00:12:25.377 "num_base_bdevs_operational": 1, 00:12:25.377 "base_bdevs_list": [ 00:12:25.377 { 00:12:25.377 "name": null, 00:12:25.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:25.377 "is_configured": false, 00:12:25.377 "data_offset": 0, 00:12:25.377 "data_size": 65536 00:12:25.377 }, 00:12:25.377 { 00:12:25.377 "name": "BaseBdev2", 00:12:25.377 "uuid": "c0a104eb-1155-5250-8dbd-987af14952ed", 00:12:25.377 "is_configured": true, 00:12:25.377 "data_offset": 0, 00:12:25.377 "data_size": 65536 00:12:25.377 } 00:12:25.377 ] 00:12:25.377 }' 00:12:25.377 17:28:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.377 17:28:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.948 17:28:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:25.948 17:28:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:25.948 17:28:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:25.948 17:28:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:25.948 17:28:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:25.948 17:28:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.948 17:28:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.948 17:28:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.948 17:28:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.948 17:28:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.948 17:28:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:12:25.948 "name": "raid_bdev1", 00:12:25.948 "uuid": "50d8cffb-da5d-4826-a791-dc57611b92cc", 00:12:25.948 "strip_size_kb": 0, 00:12:25.948 "state": "online", 00:12:25.948 "raid_level": "raid1", 00:12:25.948 "superblock": false, 00:12:25.948 "num_base_bdevs": 2, 00:12:25.948 "num_base_bdevs_discovered": 1, 00:12:25.948 "num_base_bdevs_operational": 1, 00:12:25.948 "base_bdevs_list": [ 00:12:25.948 { 00:12:25.948 "name": null, 00:12:25.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:25.948 "is_configured": false, 00:12:25.948 "data_offset": 0, 00:12:25.948 "data_size": 65536 00:12:25.948 }, 00:12:25.948 { 00:12:25.948 "name": "BaseBdev2", 00:12:25.948 "uuid": "c0a104eb-1155-5250-8dbd-987af14952ed", 00:12:25.948 "is_configured": true, 00:12:25.948 "data_offset": 0, 00:12:25.948 "data_size": 65536 00:12:25.948 } 00:12:25.948 ] 00:12:25.948 }' 00:12:25.948 17:28:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:25.948 17:28:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:25.948 17:28:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:25.948 17:28:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:25.948 17:28:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:25.948 17:28:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.948 17:28:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.948 [2024-12-07 17:28:59.160927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:25.948 [2024-12-07 17:28:59.177097] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:12:25.948 17:28:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.948 
17:28:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:25.948 [2024-12-07 17:28:59.179010] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:26.888 17:29:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:26.888 17:29:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:26.888 17:29:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:26.888 17:29:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:26.888 17:29:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:26.888 17:29:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.888 17:29:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.888 17:29:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.888 17:29:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.888 17:29:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.888 17:29:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:26.888 "name": "raid_bdev1", 00:12:26.888 "uuid": "50d8cffb-da5d-4826-a791-dc57611b92cc", 00:12:26.888 "strip_size_kb": 0, 00:12:26.888 "state": "online", 00:12:26.888 "raid_level": "raid1", 00:12:26.888 "superblock": false, 00:12:26.888 "num_base_bdevs": 2, 00:12:26.888 "num_base_bdevs_discovered": 2, 00:12:26.888 "num_base_bdevs_operational": 2, 00:12:26.888 "process": { 00:12:26.888 "type": "rebuild", 00:12:26.888 "target": "spare", 00:12:26.888 "progress": { 00:12:26.888 "blocks": 20480, 00:12:26.888 "percent": 31 00:12:26.888 } 00:12:26.888 }, 00:12:26.888 "base_bdevs_list": [ 
00:12:26.888 { 00:12:26.888 "name": "spare", 00:12:26.888 "uuid": "c4f19184-142c-5b0d-838c-4dd819b5435a", 00:12:26.888 "is_configured": true, 00:12:26.888 "data_offset": 0, 00:12:26.888 "data_size": 65536 00:12:26.888 }, 00:12:26.888 { 00:12:26.888 "name": "BaseBdev2", 00:12:26.888 "uuid": "c0a104eb-1155-5250-8dbd-987af14952ed", 00:12:26.888 "is_configured": true, 00:12:26.888 "data_offset": 0, 00:12:26.888 "data_size": 65536 00:12:26.888 } 00:12:26.888 ] 00:12:26.888 }' 00:12:26.888 17:29:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:26.888 17:29:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:26.888 17:29:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:27.148 17:29:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:27.148 17:29:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:27.148 17:29:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:27.148 17:29:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:27.148 17:29:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:27.148 17:29:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=371 00:12:27.148 17:29:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:27.148 17:29:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:27.148 17:29:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:27.148 17:29:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:27.148 17:29:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:27.148 
17:29:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:27.148 17:29:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.148 17:29:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.148 17:29:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.148 17:29:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.148 17:29:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.148 17:29:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:27.148 "name": "raid_bdev1", 00:12:27.148 "uuid": "50d8cffb-da5d-4826-a791-dc57611b92cc", 00:12:27.148 "strip_size_kb": 0, 00:12:27.148 "state": "online", 00:12:27.148 "raid_level": "raid1", 00:12:27.148 "superblock": false, 00:12:27.148 "num_base_bdevs": 2, 00:12:27.148 "num_base_bdevs_discovered": 2, 00:12:27.148 "num_base_bdevs_operational": 2, 00:12:27.148 "process": { 00:12:27.148 "type": "rebuild", 00:12:27.148 "target": "spare", 00:12:27.148 "progress": { 00:12:27.148 "blocks": 22528, 00:12:27.148 "percent": 34 00:12:27.148 } 00:12:27.148 }, 00:12:27.148 "base_bdevs_list": [ 00:12:27.148 { 00:12:27.148 "name": "spare", 00:12:27.148 "uuid": "c4f19184-142c-5b0d-838c-4dd819b5435a", 00:12:27.148 "is_configured": true, 00:12:27.148 "data_offset": 0, 00:12:27.148 "data_size": 65536 00:12:27.148 }, 00:12:27.148 { 00:12:27.148 "name": "BaseBdev2", 00:12:27.148 "uuid": "c0a104eb-1155-5250-8dbd-987af14952ed", 00:12:27.148 "is_configured": true, 00:12:27.148 "data_offset": 0, 00:12:27.148 "data_size": 65536 00:12:27.148 } 00:12:27.148 ] 00:12:27.148 }' 00:12:27.148 17:29:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:27.148 17:29:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:12:27.148 17:29:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:27.148 17:29:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:27.148 17:29:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:28.101 17:29:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:28.101 17:29:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:28.101 17:29:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:28.101 17:29:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:28.101 17:29:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:28.101 17:29:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:28.101 17:29:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.101 17:29:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.101 17:29:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.101 17:29:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.373 17:29:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.373 17:29:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:28.373 "name": "raid_bdev1", 00:12:28.373 "uuid": "50d8cffb-da5d-4826-a791-dc57611b92cc", 00:12:28.373 "strip_size_kb": 0, 00:12:28.373 "state": "online", 00:12:28.373 "raid_level": "raid1", 00:12:28.373 "superblock": false, 00:12:28.373 "num_base_bdevs": 2, 00:12:28.373 "num_base_bdevs_discovered": 2, 00:12:28.373 "num_base_bdevs_operational": 2, 00:12:28.373 "process": { 
00:12:28.373 "type": "rebuild", 00:12:28.373 "target": "spare", 00:12:28.373 "progress": { 00:12:28.373 "blocks": 45056, 00:12:28.373 "percent": 68 00:12:28.373 } 00:12:28.373 }, 00:12:28.373 "base_bdevs_list": [ 00:12:28.373 { 00:12:28.373 "name": "spare", 00:12:28.373 "uuid": "c4f19184-142c-5b0d-838c-4dd819b5435a", 00:12:28.373 "is_configured": true, 00:12:28.373 "data_offset": 0, 00:12:28.373 "data_size": 65536 00:12:28.373 }, 00:12:28.373 { 00:12:28.373 "name": "BaseBdev2", 00:12:28.373 "uuid": "c0a104eb-1155-5250-8dbd-987af14952ed", 00:12:28.373 "is_configured": true, 00:12:28.373 "data_offset": 0, 00:12:28.373 "data_size": 65536 00:12:28.373 } 00:12:28.373 ] 00:12:28.373 }' 00:12:28.373 17:29:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:28.373 17:29:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:28.373 17:29:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:28.373 17:29:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:28.373 17:29:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:29.312 [2024-12-07 17:29:02.393932] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:29.312 [2024-12-07 17:29:02.394016] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:29.312 [2024-12-07 17:29:02.394076] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:29.312 17:29:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:29.312 17:29:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:29.312 17:29:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:29.312 17:29:02 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:29.312 17:29:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:29.312 17:29:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:29.312 17:29:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.312 17:29:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.312 17:29:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.312 17:29:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.312 17:29:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.312 17:29:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:29.312 "name": "raid_bdev1", 00:12:29.312 "uuid": "50d8cffb-da5d-4826-a791-dc57611b92cc", 00:12:29.312 "strip_size_kb": 0, 00:12:29.312 "state": "online", 00:12:29.312 "raid_level": "raid1", 00:12:29.312 "superblock": false, 00:12:29.312 "num_base_bdevs": 2, 00:12:29.312 "num_base_bdevs_discovered": 2, 00:12:29.312 "num_base_bdevs_operational": 2, 00:12:29.312 "base_bdevs_list": [ 00:12:29.312 { 00:12:29.312 "name": "spare", 00:12:29.312 "uuid": "c4f19184-142c-5b0d-838c-4dd819b5435a", 00:12:29.312 "is_configured": true, 00:12:29.312 "data_offset": 0, 00:12:29.312 "data_size": 65536 00:12:29.312 }, 00:12:29.312 { 00:12:29.312 "name": "BaseBdev2", 00:12:29.312 "uuid": "c0a104eb-1155-5250-8dbd-987af14952ed", 00:12:29.312 "is_configured": true, 00:12:29.312 "data_offset": 0, 00:12:29.312 "data_size": 65536 00:12:29.312 } 00:12:29.312 ] 00:12:29.312 }' 00:12:29.313 17:29:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:29.313 17:29:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:29.313 17:29:02 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:29.572 17:29:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:29.572 17:29:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:12:29.572 17:29:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:29.572 17:29:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:29.572 17:29:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:29.572 17:29:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:29.572 17:29:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:29.572 17:29:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.572 17:29:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.572 17:29:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.572 17:29:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.572 17:29:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.573 17:29:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:29.573 "name": "raid_bdev1", 00:12:29.573 "uuid": "50d8cffb-da5d-4826-a791-dc57611b92cc", 00:12:29.573 "strip_size_kb": 0, 00:12:29.573 "state": "online", 00:12:29.573 "raid_level": "raid1", 00:12:29.573 "superblock": false, 00:12:29.573 "num_base_bdevs": 2, 00:12:29.573 "num_base_bdevs_discovered": 2, 00:12:29.573 "num_base_bdevs_operational": 2, 00:12:29.573 "base_bdevs_list": [ 00:12:29.573 { 00:12:29.573 "name": "spare", 00:12:29.573 "uuid": "c4f19184-142c-5b0d-838c-4dd819b5435a", 00:12:29.573 "is_configured": true, 
00:12:29.573 "data_offset": 0, 00:12:29.573 "data_size": 65536 00:12:29.573 }, 00:12:29.573 { 00:12:29.573 "name": "BaseBdev2", 00:12:29.573 "uuid": "c0a104eb-1155-5250-8dbd-987af14952ed", 00:12:29.573 "is_configured": true, 00:12:29.573 "data_offset": 0, 00:12:29.573 "data_size": 65536 00:12:29.573 } 00:12:29.573 ] 00:12:29.573 }' 00:12:29.573 17:29:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:29.573 17:29:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:29.573 17:29:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:29.573 17:29:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:29.573 17:29:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:29.573 17:29:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:29.573 17:29:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:29.573 17:29:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:29.573 17:29:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:29.573 17:29:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:29.573 17:29:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.573 17:29:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.573 17:29:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.573 17:29:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.573 17:29:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.573 17:29:02 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.573 17:29:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.573 17:29:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.573 17:29:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.573 17:29:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.573 "name": "raid_bdev1", 00:12:29.573 "uuid": "50d8cffb-da5d-4826-a791-dc57611b92cc", 00:12:29.573 "strip_size_kb": 0, 00:12:29.573 "state": "online", 00:12:29.573 "raid_level": "raid1", 00:12:29.573 "superblock": false, 00:12:29.573 "num_base_bdevs": 2, 00:12:29.573 "num_base_bdevs_discovered": 2, 00:12:29.573 "num_base_bdevs_operational": 2, 00:12:29.573 "base_bdevs_list": [ 00:12:29.573 { 00:12:29.573 "name": "spare", 00:12:29.573 "uuid": "c4f19184-142c-5b0d-838c-4dd819b5435a", 00:12:29.573 "is_configured": true, 00:12:29.573 "data_offset": 0, 00:12:29.573 "data_size": 65536 00:12:29.573 }, 00:12:29.573 { 00:12:29.573 "name": "BaseBdev2", 00:12:29.573 "uuid": "c0a104eb-1155-5250-8dbd-987af14952ed", 00:12:29.573 "is_configured": true, 00:12:29.573 "data_offset": 0, 00:12:29.573 "data_size": 65536 00:12:29.573 } 00:12:29.573 ] 00:12:29.573 }' 00:12:29.573 17:29:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.573 17:29:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.142 17:29:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:30.142 17:29:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.142 17:29:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.142 [2024-12-07 17:29:03.299489] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:30.142 [2024-12-07 
17:29:03.299575] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:30.142 [2024-12-07 17:29:03.299688] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:30.142 [2024-12-07 17:29:03.299803] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:30.142 [2024-12-07 17:29:03.299887] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:30.142 17:29:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.142 17:29:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.142 17:29:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.142 17:29:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:12:30.142 17:29:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.142 17:29:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.142 17:29:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:30.142 17:29:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:30.142 17:29:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:30.142 17:29:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:30.142 17:29:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:30.142 17:29:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:30.142 17:29:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:30.142 17:29:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:30.142 17:29:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:30.142 17:29:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:30.142 17:29:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:30.142 17:29:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:30.142 17:29:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:30.402 /dev/nbd0 00:12:30.402 17:29:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:30.402 17:29:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:30.402 17:29:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:30.402 17:29:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:30.402 17:29:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:30.402 17:29:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:30.402 17:29:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:30.402 17:29:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:30.402 17:29:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:30.402 17:29:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:30.402 17:29:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:30.402 1+0 records in 00:12:30.402 1+0 records out 00:12:30.402 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00051782 s, 7.9 MB/s 00:12:30.402 17:29:03 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:30.402 17:29:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:30.402 17:29:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:30.402 17:29:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:30.402 17:29:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:30.402 17:29:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:30.402 17:29:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:30.402 17:29:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:30.662 /dev/nbd1 00:12:30.662 17:29:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:30.662 17:29:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:30.662 17:29:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:30.662 17:29:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:30.662 17:29:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:30.662 17:29:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:30.662 17:29:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:30.662 17:29:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:30.662 17:29:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:30.662 17:29:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:30.662 17:29:03 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:30.662 1+0 records in 00:12:30.662 1+0 records out 00:12:30.662 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000453149 s, 9.0 MB/s 00:12:30.662 17:29:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:30.662 17:29:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:30.662 17:29:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:30.662 17:29:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:30.662 17:29:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:30.662 17:29:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:30.662 17:29:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:30.662 17:29:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:30.662 17:29:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:30.662 17:29:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:30.662 17:29:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:30.662 17:29:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:30.662 17:29:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:30.662 17:29:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:30.662 17:29:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:30.922 17:29:04 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:30.922 17:29:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:30.922 17:29:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:30.922 17:29:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:30.922 17:29:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:30.922 17:29:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:30.922 17:29:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:30.922 17:29:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:30.922 17:29:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:30.922 17:29:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:31.182 17:29:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:31.182 17:29:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:31.182 17:29:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:31.182 17:29:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:31.182 17:29:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:31.182 17:29:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:31.182 17:29:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:31.182 17:29:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:31.182 17:29:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:31.182 17:29:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 
75327 00:12:31.182 17:29:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 75327 ']' 00:12:31.182 17:29:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75327 00:12:31.182 17:29:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:12:31.182 17:29:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:31.182 17:29:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75327 00:12:31.182 17:29:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:31.182 17:29:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:31.182 17:29:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75327' 00:12:31.182 killing process with pid 75327 00:12:31.182 Received shutdown signal, test time was about 60.000000 seconds 00:12:31.182 00:12:31.182 Latency(us) 00:12:31.182 [2024-12-07T17:29:04.564Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:31.182 [2024-12-07T17:29:04.564Z] =================================================================================================================== 00:12:31.182 [2024-12-07T17:29:04.564Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:31.182 17:29:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75327 00:12:31.182 [2024-12-07 17:29:04.508136] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:31.182 17:29:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75327 00:12:31.441 [2024-12-07 17:29:04.803515] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:32.863 17:29:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:12:32.863 00:12:32.863 real 0m15.462s 00:12:32.863 user 0m17.335s 00:12:32.863 sys 
0m3.219s 00:12:32.863 ************************************ 00:12:32.863 END TEST raid_rebuild_test 00:12:32.863 ************************************ 00:12:32.863 17:29:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:32.863 17:29:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.863 17:29:05 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:12:32.863 17:29:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:32.863 17:29:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:32.863 17:29:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:32.863 ************************************ 00:12:32.863 START TEST raid_rebuild_test_sb 00:12:32.863 ************************************ 00:12:32.863 17:29:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:12:32.863 17:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:32.863 17:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:32.863 17:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:32.863 17:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:32.863 17:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:32.863 17:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:32.863 17:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:32.863 17:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:32.863 17:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:32.863 17:29:05 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:32.863 17:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:32.863 17:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:32.863 17:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:32.863 17:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:32.863 17:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:32.863 17:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:32.863 17:29:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:32.863 17:29:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:32.863 17:29:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:32.863 17:29:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:32.863 17:29:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:32.863 17:29:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:32.863 17:29:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:32.863 17:29:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:32.863 17:29:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75745 00:12:32.863 17:29:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:32.863 17:29:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75745 00:12:32.863 17:29:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' 
-z 75745 ']' 00:12:32.863 17:29:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.863 17:29:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:32.863 17:29:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:32.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:32.863 17:29:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:32.863 17:29:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.863 [2024-12-07 17:29:06.089136] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:12:32.863 [2024-12-07 17:29:06.089322] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:12:32.863 Zero copy mechanism will not be used. 
00:12:32.863 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75745 ] 00:12:33.122 [2024-12-07 17:29:06.262691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:33.122 [2024-12-07 17:29:06.371048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.381 [2024-12-07 17:29:06.568535] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:33.381 [2024-12-07 17:29:06.568579] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:33.639 17:29:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:33.639 17:29:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:33.639 17:29:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:33.639 17:29:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:33.639 17:29:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.639 17:29:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.639 BaseBdev1_malloc 00:12:33.639 17:29:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.639 17:29:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:33.639 17:29:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.639 17:29:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.639 [2024-12-07 17:29:06.971971] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:33.639 [2024-12-07 17:29:06.972104] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:12:33.639 [2024-12-07 17:29:06.972148] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:33.639 [2024-12-07 17:29:06.972181] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.639 [2024-12-07 17:29:06.974441] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.639 [2024-12-07 17:29:06.974519] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:33.639 BaseBdev1 00:12:33.639 17:29:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.639 17:29:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:33.639 17:29:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:33.639 17:29:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.639 17:29:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.898 BaseBdev2_malloc 00:12:33.898 17:29:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.898 17:29:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:33.898 17:29:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.898 17:29:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.898 [2024-12-07 17:29:07.027940] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:33.898 [2024-12-07 17:29:07.028059] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:33.898 [2024-12-07 17:29:07.028100] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:33.898 [2024-12-07 17:29:07.028131] 
vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.898 [2024-12-07 17:29:07.030268] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.898 [2024-12-07 17:29:07.030364] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:33.898 BaseBdev2 00:12:33.898 17:29:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.898 17:29:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:33.898 17:29:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.898 17:29:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.898 spare_malloc 00:12:33.898 17:29:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.898 17:29:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:33.898 17:29:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.898 17:29:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.898 spare_delay 00:12:33.898 17:29:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.898 17:29:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:33.898 17:29:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.898 17:29:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.898 [2024-12-07 17:29:07.108807] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:33.898 [2024-12-07 17:29:07.108910] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:12:33.898 [2024-12-07 17:29:07.108977] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:33.898 [2024-12-07 17:29:07.109015] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.898 [2024-12-07 17:29:07.111043] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.898 [2024-12-07 17:29:07.111117] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:33.898 spare 00:12:33.898 17:29:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.898 17:29:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:33.898 17:29:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.898 17:29:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.898 [2024-12-07 17:29:07.120843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:33.898 [2024-12-07 17:29:07.122527] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:33.898 [2024-12-07 17:29:07.122684] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:33.898 [2024-12-07 17:29:07.122699] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:33.898 [2024-12-07 17:29:07.122939] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:33.898 [2024-12-07 17:29:07.123101] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:33.898 [2024-12-07 17:29:07.123110] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:33.898 [2024-12-07 17:29:07.123268] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:12:33.898 17:29:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.898 17:29:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:33.898 17:29:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:33.898 17:29:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:33.898 17:29:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:33.898 17:29:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:33.898 17:29:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:33.898 17:29:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.898 17:29:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.898 17:29:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.898 17:29:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.898 17:29:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.898 17:29:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.898 17:29:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.898 17:29:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.898 17:29:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.898 17:29:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.898 "name": "raid_bdev1", 00:12:33.898 "uuid": "5f972365-3d8e-463c-ac5d-536da43c1e83", 00:12:33.898 
"strip_size_kb": 0, 00:12:33.898 "state": "online", 00:12:33.898 "raid_level": "raid1", 00:12:33.898 "superblock": true, 00:12:33.898 "num_base_bdevs": 2, 00:12:33.898 "num_base_bdevs_discovered": 2, 00:12:33.898 "num_base_bdevs_operational": 2, 00:12:33.898 "base_bdevs_list": [ 00:12:33.898 { 00:12:33.898 "name": "BaseBdev1", 00:12:33.898 "uuid": "6bd13070-3973-58e2-a240-42f2b1caf9ac", 00:12:33.898 "is_configured": true, 00:12:33.898 "data_offset": 2048, 00:12:33.898 "data_size": 63488 00:12:33.898 }, 00:12:33.898 { 00:12:33.898 "name": "BaseBdev2", 00:12:33.898 "uuid": "bda26211-2be1-5405-96ce-e387e278df83", 00:12:33.898 "is_configured": true, 00:12:33.898 "data_offset": 2048, 00:12:33.898 "data_size": 63488 00:12:33.898 } 00:12:33.898 ] 00:12:33.898 }' 00:12:33.898 17:29:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.898 17:29:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.466 17:29:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:34.466 17:29:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:34.466 17:29:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.466 17:29:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.466 [2024-12-07 17:29:07.560379] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:34.466 17:29:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.466 17:29:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:34.466 17:29:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:34.466 17:29:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.466 17:29:07 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.466 17:29:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.466 17:29:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.466 17:29:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:34.466 17:29:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:34.466 17:29:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:34.466 17:29:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:34.466 17:29:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:34.466 17:29:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:34.466 17:29:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:34.466 17:29:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:34.466 17:29:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:34.466 17:29:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:34.466 17:29:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:34.466 17:29:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:34.466 17:29:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:34.466 17:29:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:34.466 [2024-12-07 17:29:07.795801] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:34.466 /dev/nbd0 00:12:34.466 
17:29:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:34.466 17:29:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:34.466 17:29:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:34.466 17:29:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:34.466 17:29:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:34.466 17:29:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:34.466 17:29:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:34.466 17:29:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:34.466 17:29:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:34.466 17:29:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:34.466 17:29:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:34.724 1+0 records in 00:12:34.724 1+0 records out 00:12:34.724 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000356525 s, 11.5 MB/s 00:12:34.724 17:29:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:34.724 17:29:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:34.724 17:29:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:34.724 17:29:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:34.724 17:29:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:34.724 17:29:07 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:34.724 17:29:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:34.724 17:29:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:34.724 17:29:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:34.724 17:29:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:12:38.932 63488+0 records in 00:12:38.932 63488+0 records out 00:12:38.932 32505856 bytes (33 MB, 31 MiB) copied, 4.33298 s, 7.5 MB/s 00:12:38.932 17:29:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:38.932 17:29:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:38.932 17:29:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:38.932 17:29:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:38.932 17:29:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:38.932 17:29:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:38.932 17:29:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:39.209 [2024-12-07 17:29:12.389314] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:39.209 17:29:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:39.209 17:29:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:39.209 17:29:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:39.209 17:29:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 
00:12:39.209 17:29:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:39.209 17:29:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:39.209 17:29:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:39.209 17:29:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:39.209 17:29:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:39.209 17:29:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.209 17:29:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.209 [2024-12-07 17:29:12.425779] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:39.209 17:29:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.209 17:29:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:39.209 17:29:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:39.209 17:29:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:39.209 17:29:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:39.209 17:29:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:39.209 17:29:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:39.209 17:29:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:39.209 17:29:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:39.209 17:29:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:39.209 17:29:12 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:12:39.209 17:29:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.209 17:29:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:39.209 17:29:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.209 17:29:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.209 17:29:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.209 17:29:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:39.209 "name": "raid_bdev1", 00:12:39.209 "uuid": "5f972365-3d8e-463c-ac5d-536da43c1e83", 00:12:39.209 "strip_size_kb": 0, 00:12:39.209 "state": "online", 00:12:39.209 "raid_level": "raid1", 00:12:39.209 "superblock": true, 00:12:39.209 "num_base_bdevs": 2, 00:12:39.209 "num_base_bdevs_discovered": 1, 00:12:39.209 "num_base_bdevs_operational": 1, 00:12:39.209 "base_bdevs_list": [ 00:12:39.209 { 00:12:39.209 "name": null, 00:12:39.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.209 "is_configured": false, 00:12:39.209 "data_offset": 0, 00:12:39.209 "data_size": 63488 00:12:39.209 }, 00:12:39.209 { 00:12:39.209 "name": "BaseBdev2", 00:12:39.209 "uuid": "bda26211-2be1-5405-96ce-e387e278df83", 00:12:39.209 "is_configured": true, 00:12:39.209 "data_offset": 2048, 00:12:39.209 "data_size": 63488 00:12:39.209 } 00:12:39.209 ] 00:12:39.209 }' 00:12:39.209 17:29:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:39.209 17:29:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.780 17:29:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:39.780 17:29:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:39.780 17:29:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.780 [2024-12-07 17:29:12.873032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:39.780 [2024-12-07 17:29:12.889703] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:12:39.780 17:29:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.780 [2024-12-07 17:29:12.891659] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:39.780 17:29:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:40.732 17:29:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:40.732 17:29:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:40.732 17:29:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:40.732 17:29:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:40.732 17:29:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:40.732 17:29:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.732 17:29:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.732 17:29:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.732 17:29:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.732 17:29:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.732 17:29:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:40.732 "name": "raid_bdev1", 00:12:40.732 "uuid": "5f972365-3d8e-463c-ac5d-536da43c1e83", 
00:12:40.732 "strip_size_kb": 0, 00:12:40.732 "state": "online", 00:12:40.732 "raid_level": "raid1", 00:12:40.732 "superblock": true, 00:12:40.732 "num_base_bdevs": 2, 00:12:40.732 "num_base_bdevs_discovered": 2, 00:12:40.732 "num_base_bdevs_operational": 2, 00:12:40.732 "process": { 00:12:40.732 "type": "rebuild", 00:12:40.732 "target": "spare", 00:12:40.732 "progress": { 00:12:40.732 "blocks": 20480, 00:12:40.732 "percent": 32 00:12:40.732 } 00:12:40.732 }, 00:12:40.732 "base_bdevs_list": [ 00:12:40.732 { 00:12:40.732 "name": "spare", 00:12:40.732 "uuid": "335ad3d5-0cf8-50b0-8c28-b3f18f591acd", 00:12:40.732 "is_configured": true, 00:12:40.732 "data_offset": 2048, 00:12:40.732 "data_size": 63488 00:12:40.732 }, 00:12:40.732 { 00:12:40.732 "name": "BaseBdev2", 00:12:40.732 "uuid": "bda26211-2be1-5405-96ce-e387e278df83", 00:12:40.732 "is_configured": true, 00:12:40.732 "data_offset": 2048, 00:12:40.732 "data_size": 63488 00:12:40.732 } 00:12:40.732 ] 00:12:40.732 }' 00:12:40.732 17:29:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:40.732 17:29:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:40.732 17:29:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:40.732 17:29:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:40.732 17:29:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:40.732 17:29:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.732 17:29:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.732 [2024-12-07 17:29:14.062924] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:40.732 [2024-12-07 17:29:14.097037] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev 
raid_bdev1: No such device 00:12:40.732 [2024-12-07 17:29:14.097157] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:40.732 [2024-12-07 17:29:14.097193] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:40.732 [2024-12-07 17:29:14.097219] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:40.992 17:29:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.992 17:29:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:40.992 17:29:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:40.992 17:29:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:40.992 17:29:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:40.992 17:29:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:40.992 17:29:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:40.992 17:29:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.992 17:29:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.992 17:29:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.992 17:29:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.992 17:29:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.992 17:29:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.992 17:29:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.992 17:29:14 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.992 17:29:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.992 17:29:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.992 "name": "raid_bdev1", 00:12:40.992 "uuid": "5f972365-3d8e-463c-ac5d-536da43c1e83", 00:12:40.992 "strip_size_kb": 0, 00:12:40.992 "state": "online", 00:12:40.992 "raid_level": "raid1", 00:12:40.992 "superblock": true, 00:12:40.992 "num_base_bdevs": 2, 00:12:40.992 "num_base_bdevs_discovered": 1, 00:12:40.992 "num_base_bdevs_operational": 1, 00:12:40.992 "base_bdevs_list": [ 00:12:40.992 { 00:12:40.992 "name": null, 00:12:40.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.992 "is_configured": false, 00:12:40.992 "data_offset": 0, 00:12:40.992 "data_size": 63488 00:12:40.992 }, 00:12:40.992 { 00:12:40.992 "name": "BaseBdev2", 00:12:40.992 "uuid": "bda26211-2be1-5405-96ce-e387e278df83", 00:12:40.992 "is_configured": true, 00:12:40.992 "data_offset": 2048, 00:12:40.992 "data_size": 63488 00:12:40.992 } 00:12:40.992 ] 00:12:40.992 }' 00:12:40.992 17:29:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.992 17:29:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.252 17:29:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:41.252 17:29:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:41.252 17:29:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:41.252 17:29:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:41.252 17:29:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:41.252 17:29:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:41.252 17:29:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.252 17:29:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.252 17:29:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.252 17:29:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.252 17:29:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:41.252 "name": "raid_bdev1", 00:12:41.252 "uuid": "5f972365-3d8e-463c-ac5d-536da43c1e83", 00:12:41.252 "strip_size_kb": 0, 00:12:41.252 "state": "online", 00:12:41.252 "raid_level": "raid1", 00:12:41.252 "superblock": true, 00:12:41.252 "num_base_bdevs": 2, 00:12:41.252 "num_base_bdevs_discovered": 1, 00:12:41.252 "num_base_bdevs_operational": 1, 00:12:41.252 "base_bdevs_list": [ 00:12:41.252 { 00:12:41.252 "name": null, 00:12:41.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.252 "is_configured": false, 00:12:41.252 "data_offset": 0, 00:12:41.252 "data_size": 63488 00:12:41.252 }, 00:12:41.252 { 00:12:41.252 "name": "BaseBdev2", 00:12:41.252 "uuid": "bda26211-2be1-5405-96ce-e387e278df83", 00:12:41.252 "is_configured": true, 00:12:41.252 "data_offset": 2048, 00:12:41.252 "data_size": 63488 00:12:41.252 } 00:12:41.252 ] 00:12:41.252 }' 00:12:41.252 17:29:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:41.512 17:29:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:41.512 17:29:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:41.512 17:29:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:41.512 17:29:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 
spare 00:12:41.512 17:29:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.512 17:29:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.512 [2024-12-07 17:29:14.723533] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:41.512 [2024-12-07 17:29:14.739497] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:12:41.512 17:29:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.512 17:29:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:41.512 [2024-12-07 17:29:14.741406] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:42.449 17:29:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:42.449 17:29:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:42.449 17:29:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:42.449 17:29:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:42.449 17:29:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:42.449 17:29:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.449 17:29:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.449 17:29:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.449 17:29:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.449 17:29:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.449 17:29:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:12:42.449 "name": "raid_bdev1", 00:12:42.449 "uuid": "5f972365-3d8e-463c-ac5d-536da43c1e83", 00:12:42.449 "strip_size_kb": 0, 00:12:42.449 "state": "online", 00:12:42.449 "raid_level": "raid1", 00:12:42.449 "superblock": true, 00:12:42.449 "num_base_bdevs": 2, 00:12:42.449 "num_base_bdevs_discovered": 2, 00:12:42.449 "num_base_bdevs_operational": 2, 00:12:42.449 "process": { 00:12:42.449 "type": "rebuild", 00:12:42.449 "target": "spare", 00:12:42.449 "progress": { 00:12:42.449 "blocks": 20480, 00:12:42.449 "percent": 32 00:12:42.449 } 00:12:42.449 }, 00:12:42.449 "base_bdevs_list": [ 00:12:42.449 { 00:12:42.449 "name": "spare", 00:12:42.449 "uuid": "335ad3d5-0cf8-50b0-8c28-b3f18f591acd", 00:12:42.449 "is_configured": true, 00:12:42.450 "data_offset": 2048, 00:12:42.450 "data_size": 63488 00:12:42.450 }, 00:12:42.450 { 00:12:42.450 "name": "BaseBdev2", 00:12:42.450 "uuid": "bda26211-2be1-5405-96ce-e387e278df83", 00:12:42.450 "is_configured": true, 00:12:42.450 "data_offset": 2048, 00:12:42.450 "data_size": 63488 00:12:42.450 } 00:12:42.450 ] 00:12:42.450 }' 00:12:42.450 17:29:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:42.709 17:29:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:42.709 17:29:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:42.709 17:29:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:42.709 17:29:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:42.709 17:29:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:42.709 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:42.709 17:29:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:42.709 17:29:15 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:42.709 17:29:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:42.709 17:29:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=386 00:12:42.709 17:29:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:42.709 17:29:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:42.709 17:29:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:42.709 17:29:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:42.709 17:29:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:42.709 17:29:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:42.709 17:29:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.709 17:29:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.709 17:29:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.709 17:29:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.709 17:29:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.709 17:29:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:42.709 "name": "raid_bdev1", 00:12:42.709 "uuid": "5f972365-3d8e-463c-ac5d-536da43c1e83", 00:12:42.709 "strip_size_kb": 0, 00:12:42.709 "state": "online", 00:12:42.709 "raid_level": "raid1", 00:12:42.709 "superblock": true, 00:12:42.709 "num_base_bdevs": 2, 00:12:42.710 "num_base_bdevs_discovered": 2, 00:12:42.710 "num_base_bdevs_operational": 2, 00:12:42.710 "process": { 00:12:42.710 
"type": "rebuild", 00:12:42.710 "target": "spare", 00:12:42.710 "progress": { 00:12:42.710 "blocks": 22528, 00:12:42.710 "percent": 35 00:12:42.710 } 00:12:42.710 }, 00:12:42.710 "base_bdevs_list": [ 00:12:42.710 { 00:12:42.710 "name": "spare", 00:12:42.710 "uuid": "335ad3d5-0cf8-50b0-8c28-b3f18f591acd", 00:12:42.710 "is_configured": true, 00:12:42.710 "data_offset": 2048, 00:12:42.710 "data_size": 63488 00:12:42.710 }, 00:12:42.710 { 00:12:42.710 "name": "BaseBdev2", 00:12:42.710 "uuid": "bda26211-2be1-5405-96ce-e387e278df83", 00:12:42.710 "is_configured": true, 00:12:42.710 "data_offset": 2048, 00:12:42.710 "data_size": 63488 00:12:42.710 } 00:12:42.710 ] 00:12:42.710 }' 00:12:42.710 17:29:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:42.710 17:29:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:42.710 17:29:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:42.710 17:29:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:42.710 17:29:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:44.093 17:29:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:44.093 17:29:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:44.093 17:29:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:44.093 17:29:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:44.093 17:29:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:44.093 17:29:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:44.093 17:29:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:44.093 17:29:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.093 17:29:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.093 17:29:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.093 17:29:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.093 17:29:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:44.093 "name": "raid_bdev1", 00:12:44.093 "uuid": "5f972365-3d8e-463c-ac5d-536da43c1e83", 00:12:44.093 "strip_size_kb": 0, 00:12:44.093 "state": "online", 00:12:44.093 "raid_level": "raid1", 00:12:44.093 "superblock": true, 00:12:44.093 "num_base_bdevs": 2, 00:12:44.093 "num_base_bdevs_discovered": 2, 00:12:44.093 "num_base_bdevs_operational": 2, 00:12:44.093 "process": { 00:12:44.093 "type": "rebuild", 00:12:44.093 "target": "spare", 00:12:44.093 "progress": { 00:12:44.093 "blocks": 47104, 00:12:44.093 "percent": 74 00:12:44.093 } 00:12:44.093 }, 00:12:44.093 "base_bdevs_list": [ 00:12:44.093 { 00:12:44.093 "name": "spare", 00:12:44.093 "uuid": "335ad3d5-0cf8-50b0-8c28-b3f18f591acd", 00:12:44.093 "is_configured": true, 00:12:44.093 "data_offset": 2048, 00:12:44.093 "data_size": 63488 00:12:44.093 }, 00:12:44.093 { 00:12:44.093 "name": "BaseBdev2", 00:12:44.093 "uuid": "bda26211-2be1-5405-96ce-e387e278df83", 00:12:44.093 "is_configured": true, 00:12:44.093 "data_offset": 2048, 00:12:44.093 "data_size": 63488 00:12:44.093 } 00:12:44.093 ] 00:12:44.093 }' 00:12:44.093 17:29:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:44.093 17:29:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:44.093 17:29:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:44.093 
17:29:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:44.093 17:29:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:44.664 [2024-12-07 17:29:17.854275] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:44.664 [2024-12-07 17:29:17.854346] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:44.664 [2024-12-07 17:29:17.854471] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:44.925 17:29:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:44.925 17:29:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:44.925 17:29:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:44.925 17:29:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:44.925 17:29:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:44.925 17:29:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:44.925 17:29:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.925 17:29:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.925 17:29:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.925 17:29:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.925 17:29:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.925 17:29:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:44.925 "name": "raid_bdev1", 00:12:44.925 "uuid": "5f972365-3d8e-463c-ac5d-536da43c1e83", 00:12:44.925 
"strip_size_kb": 0, 00:12:44.925 "state": "online", 00:12:44.925 "raid_level": "raid1", 00:12:44.925 "superblock": true, 00:12:44.925 "num_base_bdevs": 2, 00:12:44.925 "num_base_bdevs_discovered": 2, 00:12:44.925 "num_base_bdevs_operational": 2, 00:12:44.925 "base_bdevs_list": [ 00:12:44.925 { 00:12:44.925 "name": "spare", 00:12:44.925 "uuid": "335ad3d5-0cf8-50b0-8c28-b3f18f591acd", 00:12:44.925 "is_configured": true, 00:12:44.925 "data_offset": 2048, 00:12:44.925 "data_size": 63488 00:12:44.925 }, 00:12:44.925 { 00:12:44.925 "name": "BaseBdev2", 00:12:44.925 "uuid": "bda26211-2be1-5405-96ce-e387e278df83", 00:12:44.925 "is_configured": true, 00:12:44.925 "data_offset": 2048, 00:12:44.925 "data_size": 63488 00:12:44.925 } 00:12:44.925 ] 00:12:44.925 }' 00:12:44.925 17:29:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:44.925 17:29:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:44.925 17:29:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:45.185 17:29:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:45.185 17:29:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:12:45.185 17:29:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:45.185 17:29:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:45.185 17:29:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:45.185 17:29:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:45.185 17:29:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:45.185 17:29:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.185 17:29:18 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.185 17:29:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.185 17:29:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.185 17:29:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.185 17:29:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:45.185 "name": "raid_bdev1", 00:12:45.185 "uuid": "5f972365-3d8e-463c-ac5d-536da43c1e83", 00:12:45.185 "strip_size_kb": 0, 00:12:45.185 "state": "online", 00:12:45.185 "raid_level": "raid1", 00:12:45.185 "superblock": true, 00:12:45.185 "num_base_bdevs": 2, 00:12:45.185 "num_base_bdevs_discovered": 2, 00:12:45.185 "num_base_bdevs_operational": 2, 00:12:45.185 "base_bdevs_list": [ 00:12:45.185 { 00:12:45.185 "name": "spare", 00:12:45.185 "uuid": "335ad3d5-0cf8-50b0-8c28-b3f18f591acd", 00:12:45.185 "is_configured": true, 00:12:45.185 "data_offset": 2048, 00:12:45.185 "data_size": 63488 00:12:45.185 }, 00:12:45.185 { 00:12:45.185 "name": "BaseBdev2", 00:12:45.185 "uuid": "bda26211-2be1-5405-96ce-e387e278df83", 00:12:45.185 "is_configured": true, 00:12:45.185 "data_offset": 2048, 00:12:45.185 "data_size": 63488 00:12:45.185 } 00:12:45.185 ] 00:12:45.185 }' 00:12:45.185 17:29:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:45.185 17:29:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:45.185 17:29:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:45.185 17:29:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:45.185 17:29:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:45.185 17:29:18 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:45.185 17:29:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:45.185 17:29:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:45.186 17:29:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:45.186 17:29:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:45.186 17:29:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.186 17:29:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.186 17:29:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.186 17:29:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.186 17:29:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.186 17:29:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.186 17:29:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.186 17:29:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.186 17:29:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.186 17:29:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.186 "name": "raid_bdev1", 00:12:45.186 "uuid": "5f972365-3d8e-463c-ac5d-536da43c1e83", 00:12:45.186 "strip_size_kb": 0, 00:12:45.186 "state": "online", 00:12:45.186 "raid_level": "raid1", 00:12:45.186 "superblock": true, 00:12:45.186 "num_base_bdevs": 2, 00:12:45.186 "num_base_bdevs_discovered": 2, 00:12:45.186 "num_base_bdevs_operational": 2, 00:12:45.186 "base_bdevs_list": [ 00:12:45.186 { 
00:12:45.186 "name": "spare", 00:12:45.186 "uuid": "335ad3d5-0cf8-50b0-8c28-b3f18f591acd", 00:12:45.186 "is_configured": true, 00:12:45.186 "data_offset": 2048, 00:12:45.186 "data_size": 63488 00:12:45.186 }, 00:12:45.186 { 00:12:45.186 "name": "BaseBdev2", 00:12:45.186 "uuid": "bda26211-2be1-5405-96ce-e387e278df83", 00:12:45.186 "is_configured": true, 00:12:45.186 "data_offset": 2048, 00:12:45.186 "data_size": 63488 00:12:45.186 } 00:12:45.186 ] 00:12:45.186 }' 00:12:45.186 17:29:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.186 17:29:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.754 17:29:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:45.754 17:29:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.754 17:29:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.754 [2024-12-07 17:29:18.904787] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:45.754 [2024-12-07 17:29:18.904868] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:45.754 [2024-12-07 17:29:18.905009] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:45.754 [2024-12-07 17:29:18.905113] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:45.754 [2024-12-07 17:29:18.905163] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:45.754 17:29:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.754 17:29:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.754 17:29:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.754 
17:29:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.754 17:29:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:12:45.754 17:29:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.754 17:29:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:45.754 17:29:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:45.754 17:29:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:45.754 17:29:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:45.754 17:29:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:45.754 17:29:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:45.754 17:29:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:45.754 17:29:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:45.754 17:29:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:45.754 17:29:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:45.754 17:29:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:45.754 17:29:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:45.754 17:29:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:46.013 /dev/nbd0 00:12:46.013 17:29:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:46.013 17:29:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:46.013 
17:29:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:46.013 17:29:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:46.013 17:29:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:46.013 17:29:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:46.013 17:29:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:46.013 17:29:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:46.013 17:29:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:46.013 17:29:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:46.013 17:29:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:46.013 1+0 records in 00:12:46.013 1+0 records out 00:12:46.013 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00048961 s, 8.4 MB/s 00:12:46.013 17:29:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:46.013 17:29:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:46.013 17:29:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:46.013 17:29:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:46.013 17:29:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:46.013 17:29:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:46.013 17:29:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:46.014 17:29:19 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:46.273 /dev/nbd1 00:12:46.273 17:29:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:46.273 17:29:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:46.273 17:29:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:46.273 17:29:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:46.273 17:29:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:46.273 17:29:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:46.273 17:29:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:46.273 17:29:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:46.273 17:29:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:46.273 17:29:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:46.273 17:29:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:46.273 1+0 records in 00:12:46.273 1+0 records out 00:12:46.273 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000317858 s, 12.9 MB/s 00:12:46.273 17:29:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:46.273 17:29:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:46.273 17:29:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:46.273 17:29:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # 
'[' 4096 '!=' 0 ']' 00:12:46.273 17:29:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:46.273 17:29:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:46.273 17:29:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:46.273 17:29:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:46.532 17:29:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:46.532 17:29:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:46.533 17:29:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:46.533 17:29:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:46.533 17:29:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:46.533 17:29:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:46.533 17:29:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:46.533 17:29:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:46.533 17:29:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:46.533 17:29:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:46.533 17:29:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:46.533 17:29:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:46.533 17:29:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:46.533 17:29:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:46.533 
17:29:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:46.533 17:29:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:46.533 17:29:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:46.792 17:29:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:46.792 17:29:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:46.792 17:29:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:46.792 17:29:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:46.792 17:29:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:46.792 17:29:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:46.792 17:29:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:46.792 17:29:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:46.792 17:29:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:46.792 17:29:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:46.792 17:29:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.792 17:29:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.792 17:29:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.792 17:29:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:46.792 17:29:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.792 17:29:20 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:46.792 [2024-12-07 17:29:20.169663] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:46.792 [2024-12-07 17:29:20.169772] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:46.792 [2024-12-07 17:29:20.169840] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:46.792 [2024-12-07 17:29:20.169874] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:46.792 [2024-12-07 17:29:20.172241] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:46.792 [2024-12-07 17:29:20.172315] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:47.052 [2024-12-07 17:29:20.172458] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:47.052 [2024-12-07 17:29:20.172557] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:47.052 spare 00:12:47.052 [2024-12-07 17:29:20.172745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:47.052 17:29:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.052 17:29:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:47.052 17:29:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.052 17:29:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.052 [2024-12-07 17:29:20.272650] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:12:47.052 [2024-12-07 17:29:20.272740] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:47.052 [2024-12-07 17:29:20.273131] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:12:47.052 [2024-12-07 
17:29:20.273370] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:12:47.052 [2024-12-07 17:29:20.273415] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:12:47.052 [2024-12-07 17:29:20.273663] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:47.052 17:29:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.052 17:29:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:47.052 17:29:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:47.052 17:29:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:47.052 17:29:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:47.052 17:29:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:47.052 17:29:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:47.052 17:29:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.052 17:29:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.052 17:29:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.052 17:29:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.052 17:29:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.052 17:29:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.052 17:29:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.052 17:29:20 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:47.052 17:29:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.052 17:29:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.052 "name": "raid_bdev1", 00:12:47.052 "uuid": "5f972365-3d8e-463c-ac5d-536da43c1e83", 00:12:47.052 "strip_size_kb": 0, 00:12:47.052 "state": "online", 00:12:47.052 "raid_level": "raid1", 00:12:47.052 "superblock": true, 00:12:47.052 "num_base_bdevs": 2, 00:12:47.052 "num_base_bdevs_discovered": 2, 00:12:47.052 "num_base_bdevs_operational": 2, 00:12:47.052 "base_bdevs_list": [ 00:12:47.052 { 00:12:47.052 "name": "spare", 00:12:47.052 "uuid": "335ad3d5-0cf8-50b0-8c28-b3f18f591acd", 00:12:47.052 "is_configured": true, 00:12:47.052 "data_offset": 2048, 00:12:47.052 "data_size": 63488 00:12:47.052 }, 00:12:47.052 { 00:12:47.052 "name": "BaseBdev2", 00:12:47.052 "uuid": "bda26211-2be1-5405-96ce-e387e278df83", 00:12:47.052 "is_configured": true, 00:12:47.052 "data_offset": 2048, 00:12:47.052 "data_size": 63488 00:12:47.052 } 00:12:47.052 ] 00:12:47.052 }' 00:12:47.052 17:29:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.052 17:29:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.621 17:29:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:47.622 17:29:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:47.622 17:29:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:47.622 17:29:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:47.622 17:29:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:47.622 17:29:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.622 
17:29:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.622 17:29:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.622 17:29:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.622 17:29:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.622 17:29:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:47.622 "name": "raid_bdev1", 00:12:47.622 "uuid": "5f972365-3d8e-463c-ac5d-536da43c1e83", 00:12:47.622 "strip_size_kb": 0, 00:12:47.622 "state": "online", 00:12:47.622 "raid_level": "raid1", 00:12:47.622 "superblock": true, 00:12:47.622 "num_base_bdevs": 2, 00:12:47.622 "num_base_bdevs_discovered": 2, 00:12:47.622 "num_base_bdevs_operational": 2, 00:12:47.622 "base_bdevs_list": [ 00:12:47.622 { 00:12:47.622 "name": "spare", 00:12:47.622 "uuid": "335ad3d5-0cf8-50b0-8c28-b3f18f591acd", 00:12:47.622 "is_configured": true, 00:12:47.622 "data_offset": 2048, 00:12:47.622 "data_size": 63488 00:12:47.622 }, 00:12:47.622 { 00:12:47.622 "name": "BaseBdev2", 00:12:47.622 "uuid": "bda26211-2be1-5405-96ce-e387e278df83", 00:12:47.622 "is_configured": true, 00:12:47.622 "data_offset": 2048, 00:12:47.622 "data_size": 63488 00:12:47.622 } 00:12:47.622 ] 00:12:47.622 }' 00:12:47.622 17:29:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:47.622 17:29:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:47.622 17:29:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:47.622 17:29:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:47.622 17:29:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.622 17:29:20 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:47.622 17:29:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.622 17:29:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.622 17:29:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.622 17:29:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:47.622 17:29:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:47.622 17:29:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.622 17:29:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.622 [2024-12-07 17:29:20.956442] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:47.622 17:29:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.622 17:29:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:47.622 17:29:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:47.622 17:29:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:47.622 17:29:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:47.622 17:29:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:47.622 17:29:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:47.622 17:29:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.622 17:29:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.622 17:29:20 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.622 17:29:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.622 17:29:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.622 17:29:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.622 17:29:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.622 17:29:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.622 17:29:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.881 17:29:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.881 "name": "raid_bdev1", 00:12:47.881 "uuid": "5f972365-3d8e-463c-ac5d-536da43c1e83", 00:12:47.881 "strip_size_kb": 0, 00:12:47.881 "state": "online", 00:12:47.881 "raid_level": "raid1", 00:12:47.881 "superblock": true, 00:12:47.881 "num_base_bdevs": 2, 00:12:47.881 "num_base_bdevs_discovered": 1, 00:12:47.881 "num_base_bdevs_operational": 1, 00:12:47.881 "base_bdevs_list": [ 00:12:47.881 { 00:12:47.881 "name": null, 00:12:47.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.881 "is_configured": false, 00:12:47.881 "data_offset": 0, 00:12:47.881 "data_size": 63488 00:12:47.881 }, 00:12:47.881 { 00:12:47.881 "name": "BaseBdev2", 00:12:47.881 "uuid": "bda26211-2be1-5405-96ce-e387e278df83", 00:12:47.881 "is_configured": true, 00:12:47.881 "data_offset": 2048, 00:12:47.881 "data_size": 63488 00:12:47.881 } 00:12:47.882 ] 00:12:47.882 }' 00:12:47.882 17:29:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.882 17:29:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.140 17:29:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 
00:12:48.140 17:29:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.140 17:29:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.140 [2024-12-07 17:29:21.427686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:48.140 [2024-12-07 17:29:21.427973] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:48.140 [2024-12-07 17:29:21.428047] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:12:48.140 [2024-12-07 17:29:21.428113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:48.140 [2024-12-07 17:29:21.445400] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:12:48.140 17:29:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.140 17:29:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:48.141 [2024-12-07 17:29:21.447375] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:49.078 17:29:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:49.078 17:29:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:49.078 17:29:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:49.078 17:29:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:49.078 17:29:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:49.338 17:29:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.338 17:29:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:12:49.338 17:29:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.338 17:29:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.338 17:29:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.338 17:29:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:49.338 "name": "raid_bdev1", 00:12:49.338 "uuid": "5f972365-3d8e-463c-ac5d-536da43c1e83", 00:12:49.338 "strip_size_kb": 0, 00:12:49.338 "state": "online", 00:12:49.338 "raid_level": "raid1", 00:12:49.338 "superblock": true, 00:12:49.338 "num_base_bdevs": 2, 00:12:49.338 "num_base_bdevs_discovered": 2, 00:12:49.338 "num_base_bdevs_operational": 2, 00:12:49.338 "process": { 00:12:49.338 "type": "rebuild", 00:12:49.338 "target": "spare", 00:12:49.338 "progress": { 00:12:49.338 "blocks": 20480, 00:12:49.338 "percent": 32 00:12:49.338 } 00:12:49.338 }, 00:12:49.338 "base_bdevs_list": [ 00:12:49.338 { 00:12:49.338 "name": "spare", 00:12:49.338 "uuid": "335ad3d5-0cf8-50b0-8c28-b3f18f591acd", 00:12:49.338 "is_configured": true, 00:12:49.338 "data_offset": 2048, 00:12:49.338 "data_size": 63488 00:12:49.338 }, 00:12:49.339 { 00:12:49.339 "name": "BaseBdev2", 00:12:49.339 "uuid": "bda26211-2be1-5405-96ce-e387e278df83", 00:12:49.339 "is_configured": true, 00:12:49.339 "data_offset": 2048, 00:12:49.339 "data_size": 63488 00:12:49.339 } 00:12:49.339 ] 00:12:49.339 }' 00:12:49.339 17:29:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:49.339 17:29:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:49.339 17:29:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:49.339 17:29:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:49.339 17:29:22 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:49.339 17:29:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.339 17:29:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.339 [2024-12-07 17:29:22.598712] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:49.339 [2024-12-07 17:29:22.652749] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:49.339 [2024-12-07 17:29:22.652833] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:49.339 [2024-12-07 17:29:22.652847] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:49.339 [2024-12-07 17:29:22.652857] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:49.339 17:29:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.339 17:29:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:49.339 17:29:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:49.339 17:29:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:49.339 17:29:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:49.339 17:29:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:49.339 17:29:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:49.339 17:29:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.339 17:29:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.339 17:29:22 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.339 17:29:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.339 17:29:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.339 17:29:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.339 17:29:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.339 17:29:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.339 17:29:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.599 17:29:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.599 "name": "raid_bdev1", 00:12:49.599 "uuid": "5f972365-3d8e-463c-ac5d-536da43c1e83", 00:12:49.599 "strip_size_kb": 0, 00:12:49.599 "state": "online", 00:12:49.599 "raid_level": "raid1", 00:12:49.599 "superblock": true, 00:12:49.599 "num_base_bdevs": 2, 00:12:49.599 "num_base_bdevs_discovered": 1, 00:12:49.599 "num_base_bdevs_operational": 1, 00:12:49.599 "base_bdevs_list": [ 00:12:49.599 { 00:12:49.599 "name": null, 00:12:49.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.599 "is_configured": false, 00:12:49.599 "data_offset": 0, 00:12:49.599 "data_size": 63488 00:12:49.599 }, 00:12:49.599 { 00:12:49.599 "name": "BaseBdev2", 00:12:49.599 "uuid": "bda26211-2be1-5405-96ce-e387e278df83", 00:12:49.599 "is_configured": true, 00:12:49.599 "data_offset": 2048, 00:12:49.599 "data_size": 63488 00:12:49.599 } 00:12:49.599 ] 00:12:49.599 }' 00:12:49.599 17:29:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.599 17:29:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.859 17:29:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 
00:12:49.859 17:29:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.859 17:29:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.859 [2024-12-07 17:29:23.167099] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:49.859 [2024-12-07 17:29:23.167222] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:49.859 [2024-12-07 17:29:23.167268] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:49.859 [2024-12-07 17:29:23.167314] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:49.859 [2024-12-07 17:29:23.167814] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:49.859 [2024-12-07 17:29:23.167877] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:49.859 [2024-12-07 17:29:23.168018] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:49.859 [2024-12-07 17:29:23.168066] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:49.859 [2024-12-07 17:29:23.168110] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:49.859 [2024-12-07 17:29:23.168197] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:49.859 [2024-12-07 17:29:23.183875] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:12:49.859 spare 00:12:49.859 17:29:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.859 [2024-12-07 17:29:23.185742] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:49.859 17:29:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:51.242 17:29:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:51.242 17:29:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:51.242 17:29:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:51.242 17:29:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:51.242 17:29:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:51.242 17:29:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.242 17:29:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:51.242 17:29:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.242 17:29:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.242 17:29:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.242 17:29:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:51.242 "name": "raid_bdev1", 00:12:51.242 "uuid": "5f972365-3d8e-463c-ac5d-536da43c1e83", 00:12:51.243 "strip_size_kb": 0, 00:12:51.243 "state": "online", 00:12:51.243 
"raid_level": "raid1", 00:12:51.243 "superblock": true, 00:12:51.243 "num_base_bdevs": 2, 00:12:51.243 "num_base_bdevs_discovered": 2, 00:12:51.243 "num_base_bdevs_operational": 2, 00:12:51.243 "process": { 00:12:51.243 "type": "rebuild", 00:12:51.243 "target": "spare", 00:12:51.243 "progress": { 00:12:51.243 "blocks": 20480, 00:12:51.243 "percent": 32 00:12:51.243 } 00:12:51.243 }, 00:12:51.243 "base_bdevs_list": [ 00:12:51.243 { 00:12:51.243 "name": "spare", 00:12:51.243 "uuid": "335ad3d5-0cf8-50b0-8c28-b3f18f591acd", 00:12:51.243 "is_configured": true, 00:12:51.243 "data_offset": 2048, 00:12:51.243 "data_size": 63488 00:12:51.243 }, 00:12:51.243 { 00:12:51.243 "name": "BaseBdev2", 00:12:51.243 "uuid": "bda26211-2be1-5405-96ce-e387e278df83", 00:12:51.243 "is_configured": true, 00:12:51.243 "data_offset": 2048, 00:12:51.243 "data_size": 63488 00:12:51.243 } 00:12:51.243 ] 00:12:51.243 }' 00:12:51.243 17:29:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:51.243 17:29:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:51.243 17:29:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:51.243 17:29:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:51.243 17:29:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:51.243 17:29:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.243 17:29:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.243 [2024-12-07 17:29:24.345362] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:51.243 [2024-12-07 17:29:24.390838] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:51.243 [2024-12-07 17:29:24.390967] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:51.243 [2024-12-07 17:29:24.391007] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:51.243 [2024-12-07 17:29:24.391044] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:51.243 17:29:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.243 17:29:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:51.243 17:29:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:51.243 17:29:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:51.243 17:29:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:51.243 17:29:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:51.243 17:29:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:51.243 17:29:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.243 17:29:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.243 17:29:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.243 17:29:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.243 17:29:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.243 17:29:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.243 17:29:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.243 17:29:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:51.243 17:29:24 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.243 17:29:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.243 "name": "raid_bdev1", 00:12:51.243 "uuid": "5f972365-3d8e-463c-ac5d-536da43c1e83", 00:12:51.243 "strip_size_kb": 0, 00:12:51.243 "state": "online", 00:12:51.243 "raid_level": "raid1", 00:12:51.243 "superblock": true, 00:12:51.243 "num_base_bdevs": 2, 00:12:51.243 "num_base_bdevs_discovered": 1, 00:12:51.243 "num_base_bdevs_operational": 1, 00:12:51.243 "base_bdevs_list": [ 00:12:51.243 { 00:12:51.243 "name": null, 00:12:51.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.243 "is_configured": false, 00:12:51.243 "data_offset": 0, 00:12:51.243 "data_size": 63488 00:12:51.243 }, 00:12:51.243 { 00:12:51.243 "name": "BaseBdev2", 00:12:51.243 "uuid": "bda26211-2be1-5405-96ce-e387e278df83", 00:12:51.243 "is_configured": true, 00:12:51.243 "data_offset": 2048, 00:12:51.243 "data_size": 63488 00:12:51.243 } 00:12:51.243 ] 00:12:51.243 }' 00:12:51.243 17:29:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.243 17:29:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.815 17:29:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:51.815 17:29:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:51.815 17:29:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:51.815 17:29:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:51.815 17:29:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:51.816 17:29:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:51.816 17:29:24 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.816 17:29:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.816 17:29:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.816 17:29:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.816 17:29:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:51.816 "name": "raid_bdev1", 00:12:51.816 "uuid": "5f972365-3d8e-463c-ac5d-536da43c1e83", 00:12:51.816 "strip_size_kb": 0, 00:12:51.816 "state": "online", 00:12:51.816 "raid_level": "raid1", 00:12:51.816 "superblock": true, 00:12:51.816 "num_base_bdevs": 2, 00:12:51.816 "num_base_bdevs_discovered": 1, 00:12:51.816 "num_base_bdevs_operational": 1, 00:12:51.816 "base_bdevs_list": [ 00:12:51.816 { 00:12:51.816 "name": null, 00:12:51.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.816 "is_configured": false, 00:12:51.816 "data_offset": 0, 00:12:51.816 "data_size": 63488 00:12:51.816 }, 00:12:51.816 { 00:12:51.816 "name": "BaseBdev2", 00:12:51.816 "uuid": "bda26211-2be1-5405-96ce-e387e278df83", 00:12:51.816 "is_configured": true, 00:12:51.816 "data_offset": 2048, 00:12:51.816 "data_size": 63488 00:12:51.816 } 00:12:51.816 ] 00:12:51.816 }' 00:12:51.816 17:29:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:51.816 17:29:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:51.816 17:29:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:51.816 17:29:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:51.816 17:29:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:51.816 17:29:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:51.816 17:29:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.816 17:29:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.816 17:29:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:51.816 17:29:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.816 17:29:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.816 [2024-12-07 17:29:25.005022] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:51.816 [2024-12-07 17:29:25.005121] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:51.816 [2024-12-07 17:29:25.005156] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:51.816 [2024-12-07 17:29:25.005175] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:51.816 [2024-12-07 17:29:25.005645] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:51.816 [2024-12-07 17:29:25.005663] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:51.816 [2024-12-07 17:29:25.005752] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:51.816 [2024-12-07 17:29:25.005782] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:51.816 [2024-12-07 17:29:25.005792] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:51.816 [2024-12-07 17:29:25.005803] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:51.816 BaseBdev1 00:12:51.816 17:29:25 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.816 17:29:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:52.755 17:29:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:52.755 17:29:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:52.755 17:29:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:52.755 17:29:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:52.755 17:29:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:52.755 17:29:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:52.755 17:29:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.755 17:29:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.755 17:29:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.755 17:29:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.755 17:29:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.755 17:29:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.755 17:29:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.755 17:29:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.755 17:29:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.755 17:29:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.755 "name": "raid_bdev1", 00:12:52.755 "uuid": "5f972365-3d8e-463c-ac5d-536da43c1e83", 00:12:52.755 
"strip_size_kb": 0, 00:12:52.755 "state": "online", 00:12:52.755 "raid_level": "raid1", 00:12:52.755 "superblock": true, 00:12:52.755 "num_base_bdevs": 2, 00:12:52.755 "num_base_bdevs_discovered": 1, 00:12:52.755 "num_base_bdevs_operational": 1, 00:12:52.755 "base_bdevs_list": [ 00:12:52.755 { 00:12:52.755 "name": null, 00:12:52.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.755 "is_configured": false, 00:12:52.755 "data_offset": 0, 00:12:52.755 "data_size": 63488 00:12:52.755 }, 00:12:52.755 { 00:12:52.755 "name": "BaseBdev2", 00:12:52.755 "uuid": "bda26211-2be1-5405-96ce-e387e278df83", 00:12:52.755 "is_configured": true, 00:12:52.755 "data_offset": 2048, 00:12:52.755 "data_size": 63488 00:12:52.755 } 00:12:52.755 ] 00:12:52.755 }' 00:12:52.755 17:29:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.755 17:29:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.323 17:29:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:53.323 17:29:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:53.323 17:29:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:53.323 17:29:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:53.323 17:29:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:53.323 17:29:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.323 17:29:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.323 17:29:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.323 17:29:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:53.323 17:29:26 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.323 17:29:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:53.323 "name": "raid_bdev1", 00:12:53.323 "uuid": "5f972365-3d8e-463c-ac5d-536da43c1e83", 00:12:53.323 "strip_size_kb": 0, 00:12:53.323 "state": "online", 00:12:53.323 "raid_level": "raid1", 00:12:53.323 "superblock": true, 00:12:53.323 "num_base_bdevs": 2, 00:12:53.323 "num_base_bdevs_discovered": 1, 00:12:53.323 "num_base_bdevs_operational": 1, 00:12:53.323 "base_bdevs_list": [ 00:12:53.323 { 00:12:53.323 "name": null, 00:12:53.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.323 "is_configured": false, 00:12:53.323 "data_offset": 0, 00:12:53.323 "data_size": 63488 00:12:53.323 }, 00:12:53.323 { 00:12:53.323 "name": "BaseBdev2", 00:12:53.323 "uuid": "bda26211-2be1-5405-96ce-e387e278df83", 00:12:53.323 "is_configured": true, 00:12:53.323 "data_offset": 2048, 00:12:53.323 "data_size": 63488 00:12:53.323 } 00:12:53.323 ] 00:12:53.323 }' 00:12:53.323 17:29:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:53.323 17:29:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:53.323 17:29:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:53.323 17:29:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:53.323 17:29:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:53.323 17:29:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:12:53.323 17:29:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:53.323 17:29:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local 
arg=rpc_cmd 00:12:53.323 17:29:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:53.323 17:29:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:53.323 17:29:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:53.323 17:29:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:53.323 17:29:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.323 17:29:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.323 [2024-12-07 17:29:26.590389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:53.323 [2024-12-07 17:29:26.590600] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:53.323 [2024-12-07 17:29:26.590661] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:53.323 request: 00:12:53.323 { 00:12:53.323 "base_bdev": "BaseBdev1", 00:12:53.323 "raid_bdev": "raid_bdev1", 00:12:53.323 "method": "bdev_raid_add_base_bdev", 00:12:53.323 "req_id": 1 00:12:53.323 } 00:12:53.323 Got JSON-RPC error response 00:12:53.323 response: 00:12:53.323 { 00:12:53.323 "code": -22, 00:12:53.323 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:53.323 } 00:12:53.323 17:29:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:53.323 17:29:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:12:53.323 17:29:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:53.323 17:29:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:53.323 17:29:26 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:53.323 17:29:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:54.260 17:29:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:54.260 17:29:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:54.260 17:29:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:54.260 17:29:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:54.260 17:29:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:54.260 17:29:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:54.260 17:29:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.260 17:29:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.260 17:29:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.260 17:29:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.260 17:29:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.260 17:29:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.260 17:29:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.260 17:29:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.260 17:29:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.518 17:29:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.518 "name": "raid_bdev1", 00:12:54.518 "uuid": 
"5f972365-3d8e-463c-ac5d-536da43c1e83", 00:12:54.518 "strip_size_kb": 0, 00:12:54.518 "state": "online", 00:12:54.518 "raid_level": "raid1", 00:12:54.518 "superblock": true, 00:12:54.518 "num_base_bdevs": 2, 00:12:54.518 "num_base_bdevs_discovered": 1, 00:12:54.518 "num_base_bdevs_operational": 1, 00:12:54.518 "base_bdevs_list": [ 00:12:54.519 { 00:12:54.519 "name": null, 00:12:54.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.519 "is_configured": false, 00:12:54.519 "data_offset": 0, 00:12:54.519 "data_size": 63488 00:12:54.519 }, 00:12:54.519 { 00:12:54.519 "name": "BaseBdev2", 00:12:54.519 "uuid": "bda26211-2be1-5405-96ce-e387e278df83", 00:12:54.519 "is_configured": true, 00:12:54.519 "data_offset": 2048, 00:12:54.519 "data_size": 63488 00:12:54.519 } 00:12:54.519 ] 00:12:54.519 }' 00:12:54.519 17:29:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.519 17:29:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.777 17:29:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:54.777 17:29:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:54.777 17:29:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:54.777 17:29:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:54.777 17:29:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:54.777 17:29:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.777 17:29:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.777 17:29:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.777 17:29:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:12:54.777 17:29:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.777 17:29:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:54.777 "name": "raid_bdev1", 00:12:54.777 "uuid": "5f972365-3d8e-463c-ac5d-536da43c1e83", 00:12:54.777 "strip_size_kb": 0, 00:12:54.777 "state": "online", 00:12:54.777 "raid_level": "raid1", 00:12:54.777 "superblock": true, 00:12:54.777 "num_base_bdevs": 2, 00:12:54.777 "num_base_bdevs_discovered": 1, 00:12:54.777 "num_base_bdevs_operational": 1, 00:12:54.777 "base_bdevs_list": [ 00:12:54.777 { 00:12:54.777 "name": null, 00:12:54.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.777 "is_configured": false, 00:12:54.777 "data_offset": 0, 00:12:54.777 "data_size": 63488 00:12:54.777 }, 00:12:54.777 { 00:12:54.777 "name": "BaseBdev2", 00:12:54.777 "uuid": "bda26211-2be1-5405-96ce-e387e278df83", 00:12:54.777 "is_configured": true, 00:12:54.777 "data_offset": 2048, 00:12:54.778 "data_size": 63488 00:12:54.778 } 00:12:54.778 ] 00:12:54.778 }' 00:12:54.778 17:29:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:54.778 17:29:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:54.778 17:29:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:54.778 17:29:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:54.778 17:29:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75745 00:12:54.778 17:29:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 75745 ']' 00:12:54.778 17:29:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 75745 00:12:54.778 17:29:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:54.778 17:29:28 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:54.778 17:29:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75745 00:12:55.036 17:29:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:55.036 17:29:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:55.036 17:29:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75745' 00:12:55.036 killing process with pid 75745 00:12:55.036 17:29:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 75745 00:12:55.036 Received shutdown signal, test time was about 60.000000 seconds 00:12:55.036 00:12:55.037 Latency(us) 00:12:55.037 [2024-12-07T17:29:28.419Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:55.037 [2024-12-07T17:29:28.419Z] =================================================================================================================== 00:12:55.037 [2024-12-07T17:29:28.419Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:55.037 [2024-12-07 17:29:28.169020] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:55.037 [2024-12-07 17:29:28.169158] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:55.037 [2024-12-07 17:29:28.169209] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:55.037 17:29:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 75745 00:12:55.037 [2024-12-07 17:29:28.169221] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:12:55.295 [2024-12-07 17:29:28.467420] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:56.230 17:29:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 
00:12:56.230 00:12:56.230 real 0m23.604s 00:12:56.230 user 0m28.470s 00:12:56.230 sys 0m3.922s 00:12:56.230 17:29:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:56.230 ************************************ 00:12:56.230 END TEST raid_rebuild_test_sb 00:12:56.230 ************************************ 00:12:56.230 17:29:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.489 17:29:29 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:12:56.489 17:29:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:56.489 17:29:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:56.489 17:29:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:56.489 ************************************ 00:12:56.489 START TEST raid_rebuild_test_io 00:12:56.489 ************************************ 00:12:56.489 17:29:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:12:56.489 17:29:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:56.489 17:29:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:56.489 17:29:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:56.489 17:29:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:56.489 17:29:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:56.489 17:29:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:56.489 17:29:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:56.489 17:29:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:56.489 17:29:29 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:56.489 17:29:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:56.489 17:29:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:56.489 17:29:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:56.489 17:29:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:56.489 17:29:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:56.489 17:29:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:56.489 17:29:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:56.489 17:29:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:56.489 17:29:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:56.489 17:29:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:56.489 17:29:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:56.489 17:29:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:56.489 17:29:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:56.489 17:29:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:56.489 17:29:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76482 00:12:56.489 17:29:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:56.489 17:29:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76482 00:12:56.489 17:29:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 
76482 ']' 00:12:56.489 17:29:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:56.489 17:29:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:56.489 17:29:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:56.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:56.489 17:29:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:56.489 17:29:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.489 [2024-12-07 17:29:29.776330] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:12:56.489 [2024-12-07 17:29:29.776551] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:12:56.489 Zero copy mechanism will not be used. 
00:12:56.489 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76482 ] 00:12:56.748 [2024-12-07 17:29:29.955304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:56.748 [2024-12-07 17:29:30.069617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.008 [2024-12-07 17:29:30.277109] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:57.008 [2024-12-07 17:29:30.277231] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:57.269 17:29:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:57.269 17:29:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:12:57.269 17:29:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:57.269 17:29:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:57.269 17:29:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.269 17:29:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.269 BaseBdev1_malloc 00:12:57.269 17:29:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.269 17:29:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:57.269 17:29:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.269 17:29:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.530 [2024-12-07 17:29:30.652490] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:57.530 [2024-12-07 17:29:30.652591] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:12:57.530 [2024-12-07 17:29:30.652631] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:57.530 [2024-12-07 17:29:30.652662] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:57.530 [2024-12-07 17:29:30.654785] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:57.530 [2024-12-07 17:29:30.654876] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:57.530 BaseBdev1 00:12:57.530 17:29:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.530 17:29:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:57.530 17:29:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:57.530 17:29:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.530 17:29:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.530 BaseBdev2_malloc 00:12:57.530 17:29:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.530 17:29:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:57.530 17:29:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.530 17:29:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.530 [2024-12-07 17:29:30.708080] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:57.530 [2024-12-07 17:29:30.708204] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:57.530 [2024-12-07 17:29:30.708235] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:57.530 [2024-12-07 17:29:30.708249] 
vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:57.530 [2024-12-07 17:29:30.710565] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:57.530 [2024-12-07 17:29:30.710605] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:57.530 BaseBdev2 00:12:57.530 17:29:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.530 17:29:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:57.530 17:29:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.530 17:29:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.530 spare_malloc 00:12:57.530 17:29:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.530 17:29:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:57.530 17:29:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.530 17:29:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.530 spare_delay 00:12:57.530 17:29:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.530 17:29:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:57.530 17:29:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.530 17:29:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.530 [2024-12-07 17:29:30.785940] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:57.530 [2024-12-07 17:29:30.786048] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:12:57.530 [2024-12-07 17:29:30.786087] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:57.530 [2024-12-07 17:29:30.786119] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:57.530 [2024-12-07 17:29:30.788579] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:57.530 [2024-12-07 17:29:30.788685] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:57.530 spare 00:12:57.531 17:29:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.531 17:29:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:57.531 17:29:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.531 17:29:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.531 [2024-12-07 17:29:30.797975] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:57.531 [2024-12-07 17:29:30.799811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:57.531 [2024-12-07 17:29:30.799966] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:57.531 [2024-12-07 17:29:30.799985] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:57.531 [2024-12-07 17:29:30.800232] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:57.531 [2024-12-07 17:29:30.800392] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:57.531 [2024-12-07 17:29:30.800404] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:57.531 [2024-12-07 17:29:30.800560] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:12:57.531 17:29:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.531 17:29:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:57.531 17:29:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:57.531 17:29:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:57.531 17:29:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:57.531 17:29:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:57.531 17:29:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:57.531 17:29:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:57.531 17:29:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:57.531 17:29:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:57.531 17:29:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:57.531 17:29:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.531 17:29:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.531 17:29:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.531 17:29:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.531 17:29:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.531 17:29:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:57.531 "name": "raid_bdev1", 00:12:57.531 "uuid": "b467003f-dd01-46db-9188-f018af379df9", 00:12:57.531 
"strip_size_kb": 0, 00:12:57.531 "state": "online", 00:12:57.531 "raid_level": "raid1", 00:12:57.531 "superblock": false, 00:12:57.531 "num_base_bdevs": 2, 00:12:57.531 "num_base_bdevs_discovered": 2, 00:12:57.531 "num_base_bdevs_operational": 2, 00:12:57.531 "base_bdevs_list": [ 00:12:57.531 { 00:12:57.531 "name": "BaseBdev1", 00:12:57.531 "uuid": "09ce6300-470e-5d73-9b64-4650f3d4a69d", 00:12:57.531 "is_configured": true, 00:12:57.531 "data_offset": 0, 00:12:57.531 "data_size": 65536 00:12:57.531 }, 00:12:57.531 { 00:12:57.531 "name": "BaseBdev2", 00:12:57.531 "uuid": "57b8c1f6-37dc-5896-907b-3c69dd1fb90f", 00:12:57.531 "is_configured": true, 00:12:57.531 "data_offset": 0, 00:12:57.531 "data_size": 65536 00:12:57.531 } 00:12:57.531 ] 00:12:57.531 }' 00:12:57.531 17:29:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:57.531 17:29:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.105 17:29:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:58.105 17:29:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.105 17:29:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:58.105 17:29:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.105 [2024-12-07 17:29:31.269441] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:58.105 17:29:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.105 17:29:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:58.105 17:29:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.105 17:29:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:58.105 17:29:31 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.105 17:29:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.105 17:29:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.105 17:29:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:58.105 17:29:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:58.105 17:29:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:58.105 17:29:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:58.105 17:29:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.105 17:29:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.105 [2024-12-07 17:29:31.368984] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:58.105 17:29:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.105 17:29:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:58.105 17:29:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:58.105 17:29:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:58.105 17:29:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:58.105 17:29:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:58.105 17:29:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:58.105 17:29:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.105 17:29:31 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.105 17:29:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.105 17:29:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.105 17:29:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.105 17:29:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.105 17:29:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.105 17:29:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.105 17:29:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.105 17:29:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.105 "name": "raid_bdev1", 00:12:58.105 "uuid": "b467003f-dd01-46db-9188-f018af379df9", 00:12:58.105 "strip_size_kb": 0, 00:12:58.105 "state": "online", 00:12:58.105 "raid_level": "raid1", 00:12:58.105 "superblock": false, 00:12:58.105 "num_base_bdevs": 2, 00:12:58.105 "num_base_bdevs_discovered": 1, 00:12:58.105 "num_base_bdevs_operational": 1, 00:12:58.105 "base_bdevs_list": [ 00:12:58.105 { 00:12:58.105 "name": null, 00:12:58.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.105 "is_configured": false, 00:12:58.105 "data_offset": 0, 00:12:58.105 "data_size": 65536 00:12:58.105 }, 00:12:58.105 { 00:12:58.105 "name": "BaseBdev2", 00:12:58.105 "uuid": "57b8c1f6-37dc-5896-907b-3c69dd1fb90f", 00:12:58.105 "is_configured": true, 00:12:58.105 "data_offset": 0, 00:12:58.105 "data_size": 65536 00:12:58.105 } 00:12:58.105 ] 00:12:58.105 }' 00:12:58.105 17:29:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.105 17:29:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 
00:12:58.105 [2024-12-07 17:29:31.465171] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:58.105 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:58.105 Zero copy mechanism will not be used. 00:12:58.105 Running I/O for 60 seconds... 00:12:58.680 17:29:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:58.680 17:29:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.680 17:29:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.680 [2024-12-07 17:29:31.811569] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:58.680 17:29:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.680 17:29:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:58.680 [2024-12-07 17:29:31.868519] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:58.680 [2024-12-07 17:29:31.870403] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:58.680 [2024-12-07 17:29:31.983536] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:58.680 [2024-12-07 17:29:31.984167] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:58.948 [2024-12-07 17:29:32.093442] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:58.948 [2024-12-07 17:29:32.093791] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:59.208 [2024-12-07 17:29:32.439185] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 
offset_end: 12288 00:12:59.208 188.00 IOPS, 564.00 MiB/s [2024-12-07T17:29:32.590Z] [2024-12-07 17:29:32.562997] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:59.778 17:29:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:59.778 17:29:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:59.778 17:29:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:59.778 17:29:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:59.778 17:29:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:59.778 17:29:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.778 17:29:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.778 17:29:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.778 17:29:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.778 17:29:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.778 17:29:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:59.778 "name": "raid_bdev1", 00:12:59.778 "uuid": "b467003f-dd01-46db-9188-f018af379df9", 00:12:59.778 "strip_size_kb": 0, 00:12:59.778 "state": "online", 00:12:59.778 "raid_level": "raid1", 00:12:59.778 "superblock": false, 00:12:59.778 "num_base_bdevs": 2, 00:12:59.778 "num_base_bdevs_discovered": 2, 00:12:59.778 "num_base_bdevs_operational": 2, 00:12:59.778 "process": { 00:12:59.778 "type": "rebuild", 00:12:59.778 "target": "spare", 00:12:59.778 "progress": { 00:12:59.778 "blocks": 12288, 00:12:59.778 "percent": 18 00:12:59.778 } 
00:12:59.778 }, 00:12:59.778 "base_bdevs_list": [ 00:12:59.778 { 00:12:59.778 "name": "spare", 00:12:59.778 "uuid": "c5a3764a-409c-5daf-bf88-6f493b5f948b", 00:12:59.778 "is_configured": true, 00:12:59.778 "data_offset": 0, 00:12:59.778 "data_size": 65536 00:12:59.778 }, 00:12:59.778 { 00:12:59.778 "name": "BaseBdev2", 00:12:59.778 "uuid": "57b8c1f6-37dc-5896-907b-3c69dd1fb90f", 00:12:59.778 "is_configured": true, 00:12:59.778 "data_offset": 0, 00:12:59.778 "data_size": 65536 00:12:59.778 } 00:12:59.778 ] 00:12:59.778 }' 00:12:59.778 17:29:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:59.778 [2024-12-07 17:29:32.911851] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:59.778 17:29:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:59.778 17:29:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:59.778 17:29:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:59.778 17:29:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:59.778 17:29:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.778 17:29:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.778 [2024-12-07 17:29:32.991095] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:59.778 [2024-12-07 17:29:33.078940] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:59.778 [2024-12-07 17:29:33.086944] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:59.778 [2024-12-07 17:29:33.087029] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:59.778 [2024-12-07 17:29:33.087056] 
bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:59.778 [2024-12-07 17:29:33.121639] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:12:59.778 17:29:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.778 17:29:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:59.778 17:29:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:59.778 17:29:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:59.778 17:29:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:59.778 17:29:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:59.778 17:29:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:59.778 17:29:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.778 17:29:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.778 17:29:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.778 17:29:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.778 17:29:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.778 17:29:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.778 17:29:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.778 17:29:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.038 17:29:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:13:00.038 17:29:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.038 "name": "raid_bdev1", 00:13:00.038 "uuid": "b467003f-dd01-46db-9188-f018af379df9", 00:13:00.038 "strip_size_kb": 0, 00:13:00.038 "state": "online", 00:13:00.038 "raid_level": "raid1", 00:13:00.038 "superblock": false, 00:13:00.038 "num_base_bdevs": 2, 00:13:00.038 "num_base_bdevs_discovered": 1, 00:13:00.038 "num_base_bdevs_operational": 1, 00:13:00.038 "base_bdevs_list": [ 00:13:00.038 { 00:13:00.038 "name": null, 00:13:00.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.038 "is_configured": false, 00:13:00.038 "data_offset": 0, 00:13:00.038 "data_size": 65536 00:13:00.038 }, 00:13:00.038 { 00:13:00.038 "name": "BaseBdev2", 00:13:00.038 "uuid": "57b8c1f6-37dc-5896-907b-3c69dd1fb90f", 00:13:00.038 "is_configured": true, 00:13:00.038 "data_offset": 0, 00:13:00.038 "data_size": 65536 00:13:00.038 } 00:13:00.038 ] 00:13:00.038 }' 00:13:00.038 17:29:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.038 17:29:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.299 173.50 IOPS, 520.50 MiB/s [2024-12-07T17:29:33.681Z] 17:29:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:00.299 17:29:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:00.299 17:29:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:00.299 17:29:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:00.299 17:29:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:00.299 17:29:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.299 17:29:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:00.299 17:29:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.299 17:29:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.299 17:29:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.299 17:29:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:00.299 "name": "raid_bdev1", 00:13:00.299 "uuid": "b467003f-dd01-46db-9188-f018af379df9", 00:13:00.299 "strip_size_kb": 0, 00:13:00.299 "state": "online", 00:13:00.299 "raid_level": "raid1", 00:13:00.299 "superblock": false, 00:13:00.299 "num_base_bdevs": 2, 00:13:00.299 "num_base_bdevs_discovered": 1, 00:13:00.299 "num_base_bdevs_operational": 1, 00:13:00.299 "base_bdevs_list": [ 00:13:00.299 { 00:13:00.299 "name": null, 00:13:00.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.299 "is_configured": false, 00:13:00.299 "data_offset": 0, 00:13:00.299 "data_size": 65536 00:13:00.299 }, 00:13:00.299 { 00:13:00.299 "name": "BaseBdev2", 00:13:00.299 "uuid": "57b8c1f6-37dc-5896-907b-3c69dd1fb90f", 00:13:00.299 "is_configured": true, 00:13:00.299 "data_offset": 0, 00:13:00.299 "data_size": 65536 00:13:00.299 } 00:13:00.299 ] 00:13:00.299 }' 00:13:00.299 17:29:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:00.559 17:29:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:00.559 17:29:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:00.559 17:29:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:00.559 17:29:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:00.559 17:29:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.559 17:29:33 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.559 [2024-12-07 17:29:33.736634] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:00.559 17:29:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.559 17:29:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:00.559 [2024-12-07 17:29:33.802150] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:00.559 [2024-12-07 17:29:33.803985] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:00.559 [2024-12-07 17:29:33.911188] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:00.559 [2024-12-07 17:29:33.911806] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:00.819 [2024-12-07 17:29:34.130836] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:00.819 [2024-12-07 17:29:34.131192] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:01.079 [2024-12-07 17:29:34.447754] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:01.079 [2024-12-07 17:29:34.448217] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:01.339 155.67 IOPS, 467.00 MiB/s [2024-12-07T17:29:34.722Z] [2024-12-07 17:29:34.662393] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:01.340 [2024-12-07 17:29:34.662703] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 
00:13:01.600 17:29:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:01.600 17:29:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:01.600 17:29:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:01.600 17:29:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:01.600 17:29:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:01.600 17:29:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.600 17:29:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.600 17:29:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.600 17:29:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.600 17:29:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.600 17:29:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:01.600 "name": "raid_bdev1", 00:13:01.600 "uuid": "b467003f-dd01-46db-9188-f018af379df9", 00:13:01.600 "strip_size_kb": 0, 00:13:01.600 "state": "online", 00:13:01.600 "raid_level": "raid1", 00:13:01.600 "superblock": false, 00:13:01.600 "num_base_bdevs": 2, 00:13:01.600 "num_base_bdevs_discovered": 2, 00:13:01.600 "num_base_bdevs_operational": 2, 00:13:01.600 "process": { 00:13:01.600 "type": "rebuild", 00:13:01.600 "target": "spare", 00:13:01.600 "progress": { 00:13:01.600 "blocks": 10240, 00:13:01.600 "percent": 15 00:13:01.600 } 00:13:01.600 }, 00:13:01.600 "base_bdevs_list": [ 00:13:01.600 { 00:13:01.600 "name": "spare", 00:13:01.600 "uuid": "c5a3764a-409c-5daf-bf88-6f493b5f948b", 00:13:01.600 "is_configured": true, 00:13:01.600 "data_offset": 0, 00:13:01.600 
"data_size": 65536 00:13:01.600 }, 00:13:01.600 { 00:13:01.600 "name": "BaseBdev2", 00:13:01.600 "uuid": "57b8c1f6-37dc-5896-907b-3c69dd1fb90f", 00:13:01.600 "is_configured": true, 00:13:01.600 "data_offset": 0, 00:13:01.600 "data_size": 65536 00:13:01.600 } 00:13:01.600 ] 00:13:01.600 }' 00:13:01.600 17:29:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:01.600 17:29:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:01.600 17:29:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:01.600 17:29:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:01.600 17:29:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:01.600 17:29:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:01.600 17:29:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:01.600 17:29:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:01.600 17:29:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=405 00:13:01.600 17:29:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:01.600 17:29:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:01.600 17:29:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:01.600 17:29:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:01.600 17:29:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:01.600 17:29:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:01.600 17:29:34 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.600 17:29:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.600 17:29:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.600 17:29:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.600 17:29:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.600 17:29:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:01.600 "name": "raid_bdev1", 00:13:01.600 "uuid": "b467003f-dd01-46db-9188-f018af379df9", 00:13:01.600 "strip_size_kb": 0, 00:13:01.600 "state": "online", 00:13:01.600 "raid_level": "raid1", 00:13:01.600 "superblock": false, 00:13:01.600 "num_base_bdevs": 2, 00:13:01.600 "num_base_bdevs_discovered": 2, 00:13:01.600 "num_base_bdevs_operational": 2, 00:13:01.600 "process": { 00:13:01.600 "type": "rebuild", 00:13:01.600 "target": "spare", 00:13:01.600 "progress": { 00:13:01.600 "blocks": 12288, 00:13:01.600 "percent": 18 00:13:01.600 } 00:13:01.600 }, 00:13:01.600 "base_bdevs_list": [ 00:13:01.600 { 00:13:01.600 "name": "spare", 00:13:01.600 "uuid": "c5a3764a-409c-5daf-bf88-6f493b5f948b", 00:13:01.600 "is_configured": true, 00:13:01.600 "data_offset": 0, 00:13:01.600 "data_size": 65536 00:13:01.600 }, 00:13:01.600 { 00:13:01.600 "name": "BaseBdev2", 00:13:01.600 "uuid": "57b8c1f6-37dc-5896-907b-3c69dd1fb90f", 00:13:01.600 "is_configured": true, 00:13:01.600 "data_offset": 0, 00:13:01.600 "data_size": 65536 00:13:01.600 } 00:13:01.600 ] 00:13:01.600 }' 00:13:01.600 17:29:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:01.860 17:29:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:01.860 17:29:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:13:01.860 17:29:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:01.860 17:29:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:01.860 [2024-12-07 17:29:35.110031] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:02.117 [2024-12-07 17:29:35.332922] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:02.375 139.00 IOPS, 417.00 MiB/s [2024-12-07T17:29:35.757Z] [2024-12-07 17:29:35.535353] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:02.941 17:29:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:02.941 17:29:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:02.941 17:29:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:02.941 17:29:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:02.941 17:29:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:02.941 17:29:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:02.941 17:29:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.941 17:29:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.941 17:29:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.941 17:29:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.941 17:29:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:02.941 17:29:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:02.941 "name": "raid_bdev1", 00:13:02.941 "uuid": "b467003f-dd01-46db-9188-f018af379df9", 00:13:02.941 "strip_size_kb": 0, 00:13:02.941 "state": "online", 00:13:02.941 "raid_level": "raid1", 00:13:02.941 "superblock": false, 00:13:02.941 "num_base_bdevs": 2, 00:13:02.941 "num_base_bdevs_discovered": 2, 00:13:02.941 "num_base_bdevs_operational": 2, 00:13:02.941 "process": { 00:13:02.941 "type": "rebuild", 00:13:02.941 "target": "spare", 00:13:02.941 "progress": { 00:13:02.941 "blocks": 30720, 00:13:02.941 "percent": 46 00:13:02.941 } 00:13:02.941 }, 00:13:02.941 "base_bdevs_list": [ 00:13:02.941 { 00:13:02.941 "name": "spare", 00:13:02.941 "uuid": "c5a3764a-409c-5daf-bf88-6f493b5f948b", 00:13:02.941 "is_configured": true, 00:13:02.941 "data_offset": 0, 00:13:02.941 "data_size": 65536 00:13:02.941 }, 00:13:02.941 { 00:13:02.941 "name": "BaseBdev2", 00:13:02.941 "uuid": "57b8c1f6-37dc-5896-907b-3c69dd1fb90f", 00:13:02.941 "is_configured": true, 00:13:02.941 "data_offset": 0, 00:13:02.941 "data_size": 65536 00:13:02.941 } 00:13:02.941 ] 00:13:02.941 }' 00:13:02.941 17:29:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:02.941 17:29:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:02.941 17:29:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:02.941 17:29:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:02.941 17:29:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:03.459 121.00 IOPS, 363.00 MiB/s [2024-12-07T17:29:36.841Z] [2024-12-07 17:29:36.796330] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:13:03.717 [2024-12-07 17:29:36.903907] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:13:03.977 [2024-12-07 17:29:37.126252] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:13:03.977 17:29:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:03.977 17:29:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:03.977 17:29:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:03.977 17:29:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:03.977 17:29:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:03.977 17:29:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:03.977 17:29:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.977 17:29:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.977 17:29:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.977 17:29:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.977 [2024-12-07 17:29:37.241366] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:13:03.977 17:29:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.977 17:29:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:03.977 "name": "raid_bdev1", 00:13:03.977 "uuid": "b467003f-dd01-46db-9188-f018af379df9", 00:13:03.977 "strip_size_kb": 0, 00:13:03.977 "state": "online", 00:13:03.977 "raid_level": "raid1", 00:13:03.977 "superblock": false, 
00:13:03.977 "num_base_bdevs": 2, 00:13:03.977 "num_base_bdevs_discovered": 2, 00:13:03.977 "num_base_bdevs_operational": 2, 00:13:03.977 "process": { 00:13:03.977 "type": "rebuild", 00:13:03.977 "target": "spare", 00:13:03.977 "progress": { 00:13:03.977 "blocks": 51200, 00:13:03.977 "percent": 78 00:13:03.977 } 00:13:03.977 }, 00:13:03.977 "base_bdevs_list": [ 00:13:03.977 { 00:13:03.977 "name": "spare", 00:13:03.977 "uuid": "c5a3764a-409c-5daf-bf88-6f493b5f948b", 00:13:03.977 "is_configured": true, 00:13:03.977 "data_offset": 0, 00:13:03.977 "data_size": 65536 00:13:03.977 }, 00:13:03.977 { 00:13:03.977 "name": "BaseBdev2", 00:13:03.977 "uuid": "57b8c1f6-37dc-5896-907b-3c69dd1fb90f", 00:13:03.977 "is_configured": true, 00:13:03.977 "data_offset": 0, 00:13:03.977 "data_size": 65536 00:13:03.977 } 00:13:03.977 ] 00:13:03.977 }' 00:13:03.977 17:29:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:03.977 17:29:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:03.977 17:29:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:04.236 17:29:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:04.236 17:29:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:04.236 [2024-12-07 17:29:37.457110] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:13:04.236 [2024-12-07 17:29:37.457626] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:13:04.804 107.83 IOPS, 323.50 MiB/s [2024-12-07T17:29:38.186Z] [2024-12-07 17:29:37.904007] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:04.804 [2024-12-07 17:29:38.003884] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: 
Finished rebuild on raid bdev raid_bdev1 00:13:04.804 [2024-12-07 17:29:38.006078] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:05.063 17:29:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:05.063 17:29:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:05.063 17:29:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:05.063 17:29:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:05.063 17:29:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:05.063 17:29:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:05.063 17:29:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.063 17:29:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.063 17:29:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.063 17:29:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.063 17:29:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.063 17:29:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:05.063 "name": "raid_bdev1", 00:13:05.063 "uuid": "b467003f-dd01-46db-9188-f018af379df9", 00:13:05.063 "strip_size_kb": 0, 00:13:05.063 "state": "online", 00:13:05.063 "raid_level": "raid1", 00:13:05.063 "superblock": false, 00:13:05.063 "num_base_bdevs": 2, 00:13:05.063 "num_base_bdevs_discovered": 2, 00:13:05.063 "num_base_bdevs_operational": 2, 00:13:05.063 "base_bdevs_list": [ 00:13:05.063 { 00:13:05.063 "name": "spare", 00:13:05.063 "uuid": "c5a3764a-409c-5daf-bf88-6f493b5f948b", 00:13:05.063 "is_configured": 
true, 00:13:05.063 "data_offset": 0, 00:13:05.063 "data_size": 65536 00:13:05.063 }, 00:13:05.063 { 00:13:05.063 "name": "BaseBdev2", 00:13:05.063 "uuid": "57b8c1f6-37dc-5896-907b-3c69dd1fb90f", 00:13:05.063 "is_configured": true, 00:13:05.063 "data_offset": 0, 00:13:05.063 "data_size": 65536 00:13:05.063 } 00:13:05.063 ] 00:13:05.063 }' 00:13:05.063 17:29:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:05.322 96.86 IOPS, 290.57 MiB/s [2024-12-07T17:29:38.704Z] 17:29:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:05.322 17:29:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:05.322 17:29:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:05.322 17:29:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:13:05.322 17:29:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:05.322 17:29:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:05.322 17:29:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:05.322 17:29:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:05.322 17:29:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:05.322 17:29:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.322 17:29:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.322 17:29:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.322 17:29:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.322 17:29:38 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.322 17:29:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:05.322 "name": "raid_bdev1", 00:13:05.322 "uuid": "b467003f-dd01-46db-9188-f018af379df9", 00:13:05.322 "strip_size_kb": 0, 00:13:05.322 "state": "online", 00:13:05.322 "raid_level": "raid1", 00:13:05.322 "superblock": false, 00:13:05.322 "num_base_bdevs": 2, 00:13:05.322 "num_base_bdevs_discovered": 2, 00:13:05.322 "num_base_bdevs_operational": 2, 00:13:05.322 "base_bdevs_list": [ 00:13:05.322 { 00:13:05.322 "name": "spare", 00:13:05.322 "uuid": "c5a3764a-409c-5daf-bf88-6f493b5f948b", 00:13:05.322 "is_configured": true, 00:13:05.322 "data_offset": 0, 00:13:05.322 "data_size": 65536 00:13:05.322 }, 00:13:05.322 { 00:13:05.322 "name": "BaseBdev2", 00:13:05.322 "uuid": "57b8c1f6-37dc-5896-907b-3c69dd1fb90f", 00:13:05.322 "is_configured": true, 00:13:05.322 "data_offset": 0, 00:13:05.322 "data_size": 65536 00:13:05.322 } 00:13:05.322 ] 00:13:05.322 }' 00:13:05.322 17:29:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:05.322 17:29:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:05.322 17:29:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:05.322 17:29:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:05.322 17:29:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:05.322 17:29:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:05.322 17:29:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:05.322 17:29:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:05.322 17:29:38 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:05.322 17:29:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:05.322 17:29:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:05.322 17:29:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:05.322 17:29:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:05.322 17:29:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:05.322 17:29:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.322 17:29:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.322 17:29:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.322 17:29:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.322 17:29:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.322 17:29:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:05.322 "name": "raid_bdev1", 00:13:05.322 "uuid": "b467003f-dd01-46db-9188-f018af379df9", 00:13:05.322 "strip_size_kb": 0, 00:13:05.322 "state": "online", 00:13:05.322 "raid_level": "raid1", 00:13:05.322 "superblock": false, 00:13:05.322 "num_base_bdevs": 2, 00:13:05.322 "num_base_bdevs_discovered": 2, 00:13:05.323 "num_base_bdevs_operational": 2, 00:13:05.323 "base_bdevs_list": [ 00:13:05.323 { 00:13:05.323 "name": "spare", 00:13:05.323 "uuid": "c5a3764a-409c-5daf-bf88-6f493b5f948b", 00:13:05.323 "is_configured": true, 00:13:05.323 "data_offset": 0, 00:13:05.323 "data_size": 65536 00:13:05.323 }, 00:13:05.323 { 00:13:05.323 "name": "BaseBdev2", 00:13:05.323 "uuid": "57b8c1f6-37dc-5896-907b-3c69dd1fb90f", 00:13:05.323 "is_configured": true, 00:13:05.323 
"data_offset": 0, 00:13:05.323 "data_size": 65536 00:13:05.323 } 00:13:05.323 ] 00:13:05.323 }' 00:13:05.323 17:29:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.323 17:29:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.890 17:29:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:05.890 17:29:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.890 17:29:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.890 [2024-12-07 17:29:39.019847] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:05.890 [2024-12-07 17:29:39.019920] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:05.890 00:13:05.890 Latency(us) 00:13:05.890 [2024-12-07T17:29:39.272Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:05.890 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:05.890 raid_bdev1 : 7.66 92.78 278.35 0.00 0.00 14726.46 311.22 108978.64 00:13:05.890 [2024-12-07T17:29:39.272Z] =================================================================================================================== 00:13:05.890 [2024-12-07T17:29:39.272Z] Total : 92.78 278.35 0.00 0.00 14726.46 311.22 108978.64 00:13:05.890 [2024-12-07 17:29:39.137073] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:05.890 [2024-12-07 17:29:39.137199] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:05.890 [2024-12-07 17:29:39.137295] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:05.890 [2024-12-07 17:29:39.137367] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:05.890 
{ 00:13:05.890 "results": [ 00:13:05.890 { 00:13:05.890 "job": "raid_bdev1", 00:13:05.890 "core_mask": "0x1", 00:13:05.890 "workload": "randrw", 00:13:05.890 "percentage": 50, 00:13:05.890 "status": "finished", 00:13:05.890 "queue_depth": 2, 00:13:05.890 "io_size": 3145728, 00:13:05.890 "runtime": 7.663127, 00:13:05.890 "iops": 92.7819674657617, 00:13:05.890 "mibps": 278.3459023972851, 00:13:05.890 "io_failed": 0, 00:13:05.890 "io_timeout": 0, 00:13:05.890 "avg_latency_us": 14726.459472174623, 00:13:05.890 "min_latency_us": 311.22445414847164, 00:13:05.890 "max_latency_us": 108978.64104803493 00:13:05.890 } 00:13:05.890 ], 00:13:05.890 "core_count": 1 00:13:05.890 } 00:13:05.890 17:29:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.890 17:29:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.890 17:29:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.890 17:29:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:05.890 17:29:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.890 17:29:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.890 17:29:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:05.890 17:29:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:05.890 17:29:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:05.890 17:29:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:05.890 17:29:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:05.890 17:29:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:05.890 17:29:39 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:05.890 17:29:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:05.890 17:29:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:05.890 17:29:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:05.890 17:29:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:05.890 17:29:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:05.890 17:29:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:06.149 /dev/nbd0 00:13:06.150 17:29:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:06.150 17:29:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:06.150 17:29:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:06.150 17:29:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:06.150 17:29:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:06.150 17:29:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:06.150 17:29:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:06.150 17:29:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:06.150 17:29:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:06.150 17:29:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:06.150 17:29:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
00:13:06.150 1+0 records in 00:13:06.150 1+0 records out 00:13:06.150 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000431239 s, 9.5 MB/s 00:13:06.150 17:29:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.150 17:29:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:06.150 17:29:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.150 17:29:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:06.150 17:29:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:06.150 17:29:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:06.150 17:29:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:06.150 17:29:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:06.150 17:29:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:13:06.150 17:29:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:06.150 17:29:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:06.150 17:29:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:06.150 17:29:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:06.150 17:29:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:06.150 17:29:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:06.150 17:29:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:06.150 17:29:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 
00:13:06.150 17:29:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:06.150 17:29:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:06.408 /dev/nbd1 00:13:06.408 17:29:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:06.408 17:29:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:06.408 17:29:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:06.408 17:29:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:06.408 17:29:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:06.408 17:29:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:06.408 17:29:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:06.408 17:29:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:06.409 17:29:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:06.409 17:29:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:06.409 17:29:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:06.409 1+0 records in 00:13:06.409 1+0 records out 00:13:06.409 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000568314 s, 7.2 MB/s 00:13:06.409 17:29:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.409 17:29:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:06.409 17:29:39 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.409 17:29:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:06.409 17:29:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:06.409 17:29:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:06.409 17:29:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:06.409 17:29:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:06.667 17:29:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:06.667 17:29:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:06.667 17:29:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:06.667 17:29:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:06.667 17:29:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:06.667 17:29:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:06.667 17:29:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:06.926 17:29:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:06.926 17:29:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:06.926 17:29:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:06.926 17:29:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:06.926 17:29:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:06.926 17:29:40 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:06.926 17:29:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:06.926 17:29:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:06.926 17:29:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:06.926 17:29:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:06.926 17:29:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:06.926 17:29:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:06.926 17:29:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:06.926 17:29:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:06.926 17:29:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:06.926 17:29:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:06.926 17:29:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:06.926 17:29:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:06.926 17:29:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:06.926 17:29:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:06.926 17:29:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:06.926 17:29:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:06.926 17:29:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:06.926 17:29:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:06.926 17:29:40 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76482 00:13:06.926 17:29:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76482 ']' 00:13:06.926 17:29:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76482 00:13:06.926 17:29:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:13:06.926 17:29:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:07.184 17:29:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76482 00:13:07.184 17:29:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:07.184 17:29:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:07.184 17:29:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76482' 00:13:07.184 killing process with pid 76482 00:13:07.184 17:29:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76482 00:13:07.184 Received shutdown signal, test time was about 8.891465 seconds 00:13:07.184 00:13:07.184 Latency(us) 00:13:07.184 [2024-12-07T17:29:40.566Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:07.184 [2024-12-07T17:29:40.566Z] =================================================================================================================== 00:13:07.184 [2024-12-07T17:29:40.566Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:07.184 [2024-12-07 17:29:40.341631] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:07.184 17:29:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76482 00:13:07.443 [2024-12-07 17:29:40.571083] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:08.379 17:29:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 
00:13:08.379 00:13:08.379 real 0m12.070s 00:13:08.379 user 0m15.126s 00:13:08.379 sys 0m1.526s 00:13:08.379 17:29:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:08.379 17:29:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.379 ************************************ 00:13:08.379 END TEST raid_rebuild_test_io 00:13:08.379 ************************************ 00:13:08.639 17:29:41 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:13:08.639 17:29:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:08.639 17:29:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:08.639 17:29:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:08.639 ************************************ 00:13:08.639 START TEST raid_rebuild_test_sb_io 00:13:08.639 ************************************ 00:13:08.639 17:29:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:13:08.639 17:29:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:08.639 17:29:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:08.639 17:29:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:08.639 17:29:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:08.639 17:29:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:08.639 17:29:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:08.639 17:29:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:08.639 17:29:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:08.639 17:29:41 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:08.639 17:29:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:08.639 17:29:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:08.639 17:29:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:08.639 17:29:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:08.639 17:29:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:08.639 17:29:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:08.639 17:29:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:08.639 17:29:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:08.639 17:29:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:08.639 17:29:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:08.639 17:29:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:08.639 17:29:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:08.639 17:29:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:08.639 17:29:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:08.639 17:29:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:08.639 17:29:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76858 00:13:08.639 17:29:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76858 00:13:08.639 17:29:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:08.639 17:29:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 76858 ']' 00:13:08.639 17:29:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.639 17:29:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:08.639 17:29:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:08.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:08.639 17:29:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:08.639 17:29:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.639 [2024-12-07 17:29:41.921046] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:13:08.639 [2024-12-07 17:29:41.921678] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:13:08.639 Zero copy mechanism will not be used. 
00:13:08.639 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76858 ] 00:13:08.898 [2024-12-07 17:29:42.100161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:08.898 [2024-12-07 17:29:42.209312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.157 [2024-12-07 17:29:42.402847] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:09.157 [2024-12-07 17:29:42.402982] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:09.416 17:29:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:09.416 17:29:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:13:09.416 17:29:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:09.416 17:29:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:09.416 17:29:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.416 17:29:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.674 BaseBdev1_malloc 00:13:09.674 17:29:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.674 17:29:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:09.674 17:29:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.674 17:29:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.674 [2024-12-07 17:29:42.818716] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:09.674 [2024-12-07 17:29:42.818833] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:09.675 [2024-12-07 17:29:42.818873] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:09.675 [2024-12-07 17:29:42.818903] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.675 [2024-12-07 17:29:42.821013] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.675 [2024-12-07 17:29:42.821087] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:09.675 BaseBdev1 00:13:09.675 17:29:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.675 17:29:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:09.675 17:29:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:09.675 17:29:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.675 17:29:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.675 BaseBdev2_malloc 00:13:09.675 17:29:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.675 17:29:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:09.675 17:29:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.675 17:29:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.675 [2024-12-07 17:29:42.872080] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:09.675 [2024-12-07 17:29:42.872199] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:09.675 [2024-12-07 17:29:42.872242] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device 
created at: 0x0x616000007e80 00:13:09.675 [2024-12-07 17:29:42.872283] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.675 [2024-12-07 17:29:42.874515] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.675 [2024-12-07 17:29:42.874590] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:09.675 BaseBdev2 00:13:09.675 17:29:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.675 17:29:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:09.675 17:29:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.675 17:29:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.675 spare_malloc 00:13:09.675 17:29:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.675 17:29:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:09.675 17:29:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.675 17:29:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.675 spare_delay 00:13:09.675 17:29:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.675 17:29:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:09.675 17:29:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.675 17:29:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.675 [2024-12-07 17:29:42.949480] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:09.675 
[2024-12-07 17:29:42.949613] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:09.675 [2024-12-07 17:29:42.949651] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:09.675 [2024-12-07 17:29:42.949681] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.675 [2024-12-07 17:29:42.951708] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.675 [2024-12-07 17:29:42.951783] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:09.675 spare 00:13:09.675 17:29:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.675 17:29:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:09.675 17:29:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.675 17:29:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.675 [2024-12-07 17:29:42.961522] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:09.675 [2024-12-07 17:29:42.963387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:09.675 [2024-12-07 17:29:42.963571] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:09.675 [2024-12-07 17:29:42.963587] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:09.675 [2024-12-07 17:29:42.963811] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:09.675 [2024-12-07 17:29:42.963986] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:09.675 [2024-12-07 17:29:42.963997] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000007780 00:13:09.675 [2024-12-07 17:29:42.964162] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:09.675 17:29:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.675 17:29:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:09.675 17:29:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:09.675 17:29:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:09.675 17:29:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:09.675 17:29:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:09.675 17:29:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:09.675 17:29:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.675 17:29:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.675 17:29:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.675 17:29:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.675 17:29:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.675 17:29:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.675 17:29:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.675 17:29:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.675 17:29:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.675 17:29:43 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.675 "name": "raid_bdev1", 00:13:09.675 "uuid": "9cba6776-2670-4b28-920d-9842f4d44d91", 00:13:09.675 "strip_size_kb": 0, 00:13:09.675 "state": "online", 00:13:09.675 "raid_level": "raid1", 00:13:09.675 "superblock": true, 00:13:09.675 "num_base_bdevs": 2, 00:13:09.675 "num_base_bdevs_discovered": 2, 00:13:09.675 "num_base_bdevs_operational": 2, 00:13:09.675 "base_bdevs_list": [ 00:13:09.675 { 00:13:09.675 "name": "BaseBdev1", 00:13:09.675 "uuid": "b268fd55-0ca4-52b2-b5f8-6543ce321a3d", 00:13:09.675 "is_configured": true, 00:13:09.675 "data_offset": 2048, 00:13:09.675 "data_size": 63488 00:13:09.675 }, 00:13:09.675 { 00:13:09.675 "name": "BaseBdev2", 00:13:09.675 "uuid": "76668694-a89a-5dd2-ab44-62261b111c31", 00:13:09.675 "is_configured": true, 00:13:09.675 "data_offset": 2048, 00:13:09.675 "data_size": 63488 00:13:09.675 } 00:13:09.675 ] 00:13:09.675 }' 00:13:09.675 17:29:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.675 17:29:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:10.243 17:29:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:10.243 17:29:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.243 17:29:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:10.243 17:29:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:10.243 [2024-12-07 17:29:43.417038] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:10.243 17:29:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.243 17:29:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:10.243 17:29:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:10.243 17:29:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.243 17:29:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:10.243 17:29:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:10.243 17:29:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.243 17:29:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:10.243 17:29:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:10.243 17:29:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:10.243 17:29:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.243 17:29:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:10.243 17:29:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:10.243 [2024-12-07 17:29:43.512583] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:10.243 17:29:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.243 17:29:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:10.243 17:29:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:10.243 17:29:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:10.243 17:29:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:10.243 17:29:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:10.243 
17:29:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:10.243 17:29:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.243 17:29:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.243 17:29:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.243 17:29:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.243 17:29:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.243 17:29:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.243 17:29:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.243 17:29:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:10.243 17:29:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.243 17:29:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.243 "name": "raid_bdev1", 00:13:10.243 "uuid": "9cba6776-2670-4b28-920d-9842f4d44d91", 00:13:10.243 "strip_size_kb": 0, 00:13:10.243 "state": "online", 00:13:10.243 "raid_level": "raid1", 00:13:10.243 "superblock": true, 00:13:10.243 "num_base_bdevs": 2, 00:13:10.243 "num_base_bdevs_discovered": 1, 00:13:10.243 "num_base_bdevs_operational": 1, 00:13:10.243 "base_bdevs_list": [ 00:13:10.243 { 00:13:10.243 "name": null, 00:13:10.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.243 "is_configured": false, 00:13:10.243 "data_offset": 0, 00:13:10.243 "data_size": 63488 00:13:10.243 }, 00:13:10.243 { 00:13:10.243 "name": "BaseBdev2", 00:13:10.243 "uuid": "76668694-a89a-5dd2-ab44-62261b111c31", 00:13:10.243 "is_configured": true, 00:13:10.243 "data_offset": 2048, 
00:13:10.243 "data_size": 63488 00:13:10.243 } 00:13:10.243 ] 00:13:10.243 }' 00:13:10.243 17:29:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.243 17:29:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:10.243 [2024-12-07 17:29:43.611376] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:10.243 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:10.243 Zero copy mechanism will not be used. 00:13:10.243 Running I/O for 60 seconds... 00:13:10.901 17:29:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:10.901 17:29:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.901 17:29:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:10.901 [2024-12-07 17:29:43.966486] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:10.901 17:29:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.901 17:29:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:10.901 [2024-12-07 17:29:44.022557] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:10.901 [2024-12-07 17:29:44.024636] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:10.901 [2024-12-07 17:29:44.146992] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:11.159 [2024-12-07 17:29:44.377851] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:11.159 [2024-12-07 17:29:44.378290] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:11.417 
209.00 IOPS, 627.00 MiB/s [2024-12-07T17:29:44.799Z] [2024-12-07 17:29:44.714788] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:11.675 [2024-12-07 17:29:44.933802] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:11.675 [2024-12-07 17:29:44.934249] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:11.675 17:29:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:11.675 17:29:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:11.675 17:29:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:11.675 17:29:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:11.675 17:29:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:11.675 17:29:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.675 17:29:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.675 17:29:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.675 17:29:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.675 17:29:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.933 17:29:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:11.933 "name": "raid_bdev1", 00:13:11.933 "uuid": "9cba6776-2670-4b28-920d-9842f4d44d91", 00:13:11.933 "strip_size_kb": 0, 00:13:11.933 "state": "online", 00:13:11.933 "raid_level": "raid1", 00:13:11.933 "superblock": 
true, 00:13:11.933 "num_base_bdevs": 2, 00:13:11.933 "num_base_bdevs_discovered": 2, 00:13:11.933 "num_base_bdevs_operational": 2, 00:13:11.933 "process": { 00:13:11.933 "type": "rebuild", 00:13:11.933 "target": "spare", 00:13:11.933 "progress": { 00:13:11.933 "blocks": 10240, 00:13:11.933 "percent": 16 00:13:11.933 } 00:13:11.933 }, 00:13:11.933 "base_bdevs_list": [ 00:13:11.933 { 00:13:11.933 "name": "spare", 00:13:11.933 "uuid": "72a86f2f-12f7-5c5b-82ee-494303c87418", 00:13:11.933 "is_configured": true, 00:13:11.933 "data_offset": 2048, 00:13:11.933 "data_size": 63488 00:13:11.933 }, 00:13:11.933 { 00:13:11.933 "name": "BaseBdev2", 00:13:11.933 "uuid": "76668694-a89a-5dd2-ab44-62261b111c31", 00:13:11.933 "is_configured": true, 00:13:11.933 "data_offset": 2048, 00:13:11.933 "data_size": 63488 00:13:11.933 } 00:13:11.933 ] 00:13:11.933 }' 00:13:11.933 17:29:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:11.933 17:29:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:11.933 17:29:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:11.933 17:29:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:11.933 17:29:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:11.933 17:29:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.933 17:29:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.933 [2024-12-07 17:29:45.139090] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:11.933 [2024-12-07 17:29:45.175165] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:11.933 [2024-12-07 17:29:45.175703] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:11.933 [2024-12-07 17:29:45.276540] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:11.933 [2024-12-07 17:29:45.284624] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:11.933 [2024-12-07 17:29:45.284705] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:11.933 [2024-12-07 17:29:45.284737] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:12.191 [2024-12-07 17:29:45.326283] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:12.191 17:29:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.191 17:29:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:12.191 17:29:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:12.191 17:29:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:12.191 17:29:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:12.191 17:29:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:12.191 17:29:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:12.191 17:29:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.191 17:29:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.191 17:29:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.191 17:29:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:13:12.191 17:29:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.191 17:29:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.191 17:29:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:12.191 17:29:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.191 17:29:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.191 17:29:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.191 "name": "raid_bdev1", 00:13:12.191 "uuid": "9cba6776-2670-4b28-920d-9842f4d44d91", 00:13:12.191 "strip_size_kb": 0, 00:13:12.191 "state": "online", 00:13:12.191 "raid_level": "raid1", 00:13:12.191 "superblock": true, 00:13:12.191 "num_base_bdevs": 2, 00:13:12.191 "num_base_bdevs_discovered": 1, 00:13:12.191 "num_base_bdevs_operational": 1, 00:13:12.191 "base_bdevs_list": [ 00:13:12.191 { 00:13:12.191 "name": null, 00:13:12.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.191 "is_configured": false, 00:13:12.191 "data_offset": 0, 00:13:12.191 "data_size": 63488 00:13:12.191 }, 00:13:12.191 { 00:13:12.191 "name": "BaseBdev2", 00:13:12.191 "uuid": "76668694-a89a-5dd2-ab44-62261b111c31", 00:13:12.191 "is_configured": true, 00:13:12.191 "data_offset": 2048, 00:13:12.191 "data_size": 63488 00:13:12.191 } 00:13:12.191 ] 00:13:12.191 }' 00:13:12.191 17:29:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.191 17:29:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:12.449 179.50 IOPS, 538.50 MiB/s [2024-12-07T17:29:45.831Z] 17:29:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:12.449 17:29:45 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:12.449 17:29:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:12.449 17:29:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:12.449 17:29:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:12.449 17:29:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.449 17:29:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.449 17:29:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:12.449 17:29:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.449 17:29:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.449 17:29:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:12.449 "name": "raid_bdev1", 00:13:12.449 "uuid": "9cba6776-2670-4b28-920d-9842f4d44d91", 00:13:12.449 "strip_size_kb": 0, 00:13:12.449 "state": "online", 00:13:12.449 "raid_level": "raid1", 00:13:12.449 "superblock": true, 00:13:12.449 "num_base_bdevs": 2, 00:13:12.449 "num_base_bdevs_discovered": 1, 00:13:12.449 "num_base_bdevs_operational": 1, 00:13:12.449 "base_bdevs_list": [ 00:13:12.449 { 00:13:12.449 "name": null, 00:13:12.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.449 "is_configured": false, 00:13:12.449 "data_offset": 0, 00:13:12.449 "data_size": 63488 00:13:12.449 }, 00:13:12.449 { 00:13:12.449 "name": "BaseBdev2", 00:13:12.449 "uuid": "76668694-a89a-5dd2-ab44-62261b111c31", 00:13:12.450 "is_configured": true, 00:13:12.450 "data_offset": 2048, 00:13:12.450 "data_size": 63488 00:13:12.450 } 00:13:12.450 ] 00:13:12.450 }' 00:13:12.450 17:29:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"'
00:13:12.708 17:29:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:13:12.708 17:29:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:12.708 17:29:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:13:12.708 17:29:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:13:12.708 17:29:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:12.708 17:29:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:12.708 [2024-12-07 17:29:45.918599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:13:12.708 17:29:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:12.708 17:29:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1
00:13:12.708 [2024-12-07 17:29:45.961917] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:13:12.708 [2024-12-07 17:29:45.963873] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:13:12.708 [2024-12-07 17:29:46.083019] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:13:12.708 [2024-12-07 17:29:46.083476] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:13:12.966 [2024-12-07 17:29:46.209480] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:13:12.966 [2024-12-07 17:29:46.209813] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:13:13.224 [2024-12-07 17:29:46.534767] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
00:13:13.481 175.33 IOPS, 526.00 MiB/s
[2024-12-07T17:29:46.863Z] [2024-12-07 17:29:46.771735] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:13:13.739 17:29:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:13.739 17:29:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:13.739 17:29:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:13.739 17:29:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:13.739 17:29:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:13.739 17:29:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:13.739 17:29:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:13.739 17:29:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:13.739 17:29:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:13.739 17:29:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:13.739 17:29:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:13.739 "name": "raid_bdev1",
00:13:13.739 "uuid": "9cba6776-2670-4b28-920d-9842f4d44d91",
00:13:13.739 "strip_size_kb": 0,
00:13:13.739 "state": "online",
00:13:13.739 "raid_level": "raid1",
00:13:13.739 "superblock": true,
00:13:13.739 "num_base_bdevs": 2,
00:13:13.739 "num_base_bdevs_discovered": 2,
00:13:13.739 "num_base_bdevs_operational": 2,
00:13:13.739 "process": {
00:13:13.739 "type": "rebuild",
"target": "spare",
00:13:13.739 "progress": {
00:13:13.739 "blocks": 10240,
00:13:13.739 "percent": 16
00:13:13.739 }
00:13:13.739 },
00:13:13.739 "base_bdevs_list": [
00:13:13.739 {
00:13:13.739 "name": "spare",
00:13:13.739 "uuid": "72a86f2f-12f7-5c5b-82ee-494303c87418",
00:13:13.739 "is_configured": true,
00:13:13.739 "data_offset": 2048,
00:13:13.739 "data_size": 63488
00:13:13.739 },
00:13:13.739 {
00:13:13.739 "name": "BaseBdev2",
00:13:13.739 "uuid": "76668694-a89a-5dd2-ab44-62261b111c31",
00:13:13.739 "is_configured": true,
00:13:13.739 "data_offset": 2048,
00:13:13.739 "data_size": 63488
00:13:13.739 }
00:13:13.739 ]
00:13:13.739 }'
00:13:13.739 17:29:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:13.739 17:29:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:13:13.739 17:29:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:13.739 17:29:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:13:13.739 17:29:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']'
00:13:13.739 17:29:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']'
00:13:13.739 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected
00:13:13.739 17:29:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2
00:13:13.739 17:29:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']'
00:13:13.739 17:29:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']'
00:13:13.739 17:29:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=418
00:13:13.739 17:29:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:13:13.739 17:29:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:13.740 17:29:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:13.740 17:29:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:13.740 17:29:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:13.740 17:29:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:13.740 17:29:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:13.740 17:29:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:13.740 17:29:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:13.740 17:29:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:13.740 17:29:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:13.740 [2024-12-07 17:29:47.119302] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432
00:13:13.998 17:29:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:13.998 "name": "raid_bdev1",
00:13:13.998 "uuid": "9cba6776-2670-4b28-920d-9842f4d44d91",
00:13:13.998 "strip_size_kb": 0,
00:13:13.998 "state": "online",
00:13:13.998 "raid_level": "raid1",
00:13:13.998 "superblock": true,
00:13:13.998 "num_base_bdevs": 2,
00:13:13.998 "num_base_bdevs_discovered": 2,
00:13:13.998 "num_base_bdevs_operational": 2,
00:13:13.998 "process": {
00:13:13.998 "type": "rebuild",
00:13:13.998 "target": "spare",
00:13:13.998 "progress": {
00:13:13.998 "blocks": 12288,
00:13:13.998 "percent": 19
00:13:13.998 }
00:13:13.998 },
00:13:13.998 "base_bdevs_list": [
00:13:13.998 {
00:13:13.998 "name": "spare",
00:13:13.998 "uuid": "72a86f2f-12f7-5c5b-82ee-494303c87418",
00:13:13.998 "is_configured": true,
00:13:13.998 "data_offset": 2048,
00:13:13.998 "data_size": 63488
00:13:13.998 },
00:13:13.998 {
00:13:13.998 "name": "BaseBdev2",
00:13:13.998 "uuid": "76668694-a89a-5dd2-ab44-62261b111c31",
00:13:13.998 "is_configured": true,
00:13:13.998 "data_offset": 2048,
00:13:13.998 "data_size": 63488
00:13:13.998 }
00:13:13.998 ]
00:13:13.998 }'
00:13:13.998 17:29:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:13.998 17:29:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:13:13.998 17:29:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:13.998 17:29:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:13:13.998 17:29:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1
00:13:13.998 [2024-12-07 17:29:47.344658] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:13:13.998 [2024-12-07 17:29:47.345077] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:13:14.515 143.00 IOPS, 429.00 MiB/s
[2024-12-07T17:29:47.897Z] [2024-12-07 17:29:47.795635] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576
00:13:14.774 [2024-12-07 17:29:48.145692] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720
00:13:15.034 17:29:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:13:15.034 17:29:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:15.034 17:29:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:15.034 17:29:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:15.034 17:29:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:15.034 17:29:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:15.034 17:29:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:15.034 17:29:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:15.034 17:29:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:15.034 17:29:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:15.034 17:29:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:15.034 [2024-12-07 17:29:48.266190] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720
00:13:15.034 17:29:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:15.034 "name": "raid_bdev1",
00:13:15.034 "uuid": "9cba6776-2670-4b28-920d-9842f4d44d91",
00:13:15.034 "strip_size_kb": 0,
00:13:15.034 "state": "online",
00:13:15.034 "raid_level": "raid1",
00:13:15.034 "superblock": true,
00:13:15.034 "num_base_bdevs": 2,
00:13:15.034 "num_base_bdevs_discovered": 2,
00:13:15.034 "num_base_bdevs_operational": 2,
00:13:15.034 "process": {
00:13:15.034 "type": "rebuild",
00:13:15.034 "target": "spare",
00:13:15.034 "progress": {
00:13:15.034 "blocks": 26624,
00:13:15.034 "percent": 41
00:13:15.034 }
00:13:15.034 },
00:13:15.034 "base_bdevs_list": [
00:13:15.034 {
00:13:15.034 "name": "spare",
00:13:15.034 "uuid": "72a86f2f-12f7-5c5b-82ee-494303c87418",
00:13:15.034 "is_configured": true,
00:13:15.034 "data_offset": 2048,
00:13:15.034 "data_size": 63488
00:13:15.034 },
00:13:15.034 {
00:13:15.034 "name": "BaseBdev2",
00:13:15.034 "uuid": "76668694-a89a-5dd2-ab44-62261b111c31",
00:13:15.034 "is_configured": true,
00:13:15.034 "data_offset": 2048,
00:13:15.034 "data_size": 63488
00:13:15.034 }
00:13:15.034 ]
00:13:15.034 }'
00:13:15.034 17:29:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:15.034 17:29:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:13:15.034 17:29:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:15.034 17:29:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:13:15.034 17:29:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1
00:13:15.294 [2024-12-07 17:29:48.487952] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864
00:13:15.553 127.80 IOPS, 383.40 MiB/s
[2024-12-07T17:29:48.935Z] [2024-12-07 17:29:48.899353] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008
00:13:16.123 [2024-12-07 17:29:49.222947] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152
00:13:16.123 [2024-12-07 17:29:49.223567] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152
00:13:16.123 17:29:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:13:16.123 17:29:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:16.123 17:29:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:16.123 17:29:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:16.123 17:29:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:16.123 17:29:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:16.123 17:29:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:16.123 17:29:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:16.123 17:29:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:16.123 17:29:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:16.123 17:29:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:16.123 17:29:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:16.123 "name": "raid_bdev1",
00:13:16.123 "uuid": "9cba6776-2670-4b28-920d-9842f4d44d91",
00:13:16.123 "strip_size_kb": 0,
00:13:16.123 "state": "online",
00:13:16.123 "raid_level": "raid1",
00:13:16.123 "superblock": true,
00:13:16.123 "num_base_bdevs": 2,
00:13:16.123 "num_base_bdevs_discovered": 2,
00:13:16.123 "num_base_bdevs_operational": 2,
00:13:16.123 "process": {
00:13:16.123 "type": "rebuild",
00:13:16.123 "target": "spare",
00:13:16.123 "progress": {
00:13:16.123 "blocks": 47104,
00:13:16.123 "percent": 74
00:13:16.123 }
00:13:16.123 },
00:13:16.123 "base_bdevs_list": [
00:13:16.123 {
00:13:16.123 "name": "spare",
00:13:16.123 "uuid": "72a86f2f-12f7-5c5b-82ee-494303c87418",
00:13:16.123 "is_configured": true,
00:13:16.123 "data_offset": 2048,
00:13:16.123 "data_size": 63488
00:13:16.123 },
00:13:16.123 {
00:13:16.123 "name": "BaseBdev2",
00:13:16.123 "uuid": "76668694-a89a-5dd2-ab44-62261b111c31",
00:13:16.123 "is_configured": true,
00:13:16.123 "data_offset": 2048,
00:13:16.123 "data_size": 63488
00:13:16.123 }
00:13:16.123 ]
00:13:16.123 }'
00:13:16.123 17:29:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:16.123 17:29:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:13:16.123 17:29:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:16.382 17:29:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:13:16.382 17:29:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1
00:13:16.382 [2024-12-07 17:29:49.571728] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296
00:13:17.319 113.17 IOPS, 339.50 MiB/s
[2024-12-07T17:29:50.701Z] [2024-12-07 17:29:50.336746] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:13:17.320 [2024-12-07 17:29:50.442025] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:13:17.320 [2024-12-07 17:29:50.444964] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:17.320 17:29:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:13:17.320 17:29:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:17.320 17:29:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:17.320 17:29:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:17.320 17:29:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:17.320 17:29:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:17.320 17:29:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:17.320 17:29:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:17.320 17:29:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:17.320 17:29:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:17.320 17:29:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:17.320 17:29:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:17.320 "name": "raid_bdev1",
00:13:17.320 "uuid": "9cba6776-2670-4b28-920d-9842f4d44d91",
00:13:17.320 "strip_size_kb": 0,
00:13:17.320 "state": "online",
00:13:17.320 "raid_level": "raid1",
00:13:17.320 "superblock": true,
00:13:17.320 "num_base_bdevs": 2,
00:13:17.320 "num_base_bdevs_discovered": 2,
00:13:17.320 "num_base_bdevs_operational": 2,
00:13:17.320 "base_bdevs_list": [
00:13:17.320 {
00:13:17.320 "name": "spare",
00:13:17.320 "uuid": "72a86f2f-12f7-5c5b-82ee-494303c87418",
00:13:17.320 "is_configured": true,
00:13:17.320 "data_offset": 2048,
00:13:17.320 "data_size": 63488
00:13:17.320 },
00:13:17.320 {
00:13:17.320 "name": "BaseBdev2",
00:13:17.320 "uuid": "76668694-a89a-5dd2-ab44-62261b111c31",
00:13:17.320 "is_configured": true,
00:13:17.320 "data_offset": 2048,
00:13:17.320 "data_size": 63488
00:13:17.320 }
00:13:17.320 ]
00:13:17.320 }'
00:13:17.320 17:29:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:17.320 101.71 IOPS, 305.14 MiB/s
[2024-12-07T17:29:50.702Z] 17:29:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]]
00:13:17.320 17:29:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:17.320 17:29:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]]
00:13:17.320 17:29:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break
00:13:17.320 17:29:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none
00:13:17.320 17:29:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:17.320 17:29:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:13:17.320 17:29:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:13:17.320 17:29:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:17.320 17:29:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:17.320 17:29:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:17.320 17:29:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:17.320 17:29:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:17.320 17:29:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:17.578 17:29:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:17.578 "name": "raid_bdev1",
00:13:17.578 "uuid": "9cba6776-2670-4b28-920d-9842f4d44d91",
00:13:17.578 "strip_size_kb": 0,
00:13:17.578 "state": "online",
00:13:17.578 "raid_level": "raid1",
00:13:17.578 "superblock": true,
00:13:17.578 "num_base_bdevs": 2,
00:13:17.578 "num_base_bdevs_discovered": 2,
00:13:17.578 "num_base_bdevs_operational": 2,
00:13:17.578 "base_bdevs_list": [
00:13:17.578 {
00:13:17.578 "name": "spare",
00:13:17.578 "uuid": "72a86f2f-12f7-5c5b-82ee-494303c87418",
00:13:17.578 "is_configured": true,
00:13:17.578 "data_offset": 2048,
00:13:17.578 "data_size": 63488
00:13:17.578 },
00:13:17.578 {
00:13:17.578 "name": "BaseBdev2",
00:13:17.578 "uuid": "76668694-a89a-5dd2-ab44-62261b111c31",
00:13:17.578 "is_configured": true,
00:13:17.578 "data_offset": 2048,
00:13:17.578 "data_size": 63488
00:13:17.578 }
00:13:17.578 ]
00:13:17.578 }'
00:13:17.578 17:29:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:17.578 17:29:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:13:17.578 17:29:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:17.578 17:29:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:13:17.578 17:29:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:13:17.578 17:29:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:17.578 17:29:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:17.578 17:29:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:17.578 17:29:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:17.578 17:29:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:13:17.578 17:29:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:17.578 17:29:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:17.578 17:29:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:17.578 17:29:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:17.578 17:29:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:17.578 17:29:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:17.578 17:29:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:17.578 17:29:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:17.578 17:29:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:17.578 17:29:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:17.578 "name": "raid_bdev1",
00:13:17.578 "uuid": "9cba6776-2670-4b28-920d-9842f4d44d91",
00:13:17.578 "strip_size_kb": 0,
00:13:17.578 "state": "online",
00:13:17.578 "raid_level": "raid1",
00:13:17.578 "superblock": true,
00:13:17.578 "num_base_bdevs": 2,
00:13:17.578 "num_base_bdevs_discovered": 2,
00:13:17.578 "num_base_bdevs_operational": 2,
00:13:17.578 "base_bdevs_list": [
00:13:17.578 {
00:13:17.578 "name": "spare",
00:13:17.578 "uuid": "72a86f2f-12f7-5c5b-82ee-494303c87418",
00:13:17.578 "is_configured": true,
00:13:17.578 "data_offset": 2048,
00:13:17.578 "data_size": 63488
00:13:17.578 },
00:13:17.578 {
00:13:17.578 "name": "BaseBdev2",
00:13:17.578 "uuid": "76668694-a89a-5dd2-ab44-62261b111c31",
00:13:17.578 "is_configured": true,
00:13:17.578 "data_offset": 2048,
00:13:17.578 "data_size": 63488
00:13:17.578 }
00:13:17.578 ]
00:13:17.578 }'
00:13:17.578 17:29:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:17.578 17:29:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:17.835 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:13:17.835 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:17.835 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:18.093 [2024-12-07 17:29:51.217316] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:13:18.093 [2024-12-07 17:29:51.217398] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:13:18.093
00:13:18.093 Latency(us)
00:13:18.093
[2024-12-07T17:29:51.475Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:18.093 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728)
00:13:18.093 raid_bdev1 : 7.71 94.80 284.41 0.00 0.00 13981.55 314.80 114473.36
[2024-12-07T17:29:51.475Z] ===================================================================================================================
00:13:18.093
[2024-12-07T17:29:51.475Z] Total : 94.80 284.41 0.00 0.00 13981.55 314.80 114473.36
00:13:18.093 [2024-12-07 17:29:51.329898] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:13:18.093 [2024-12-07 17:29:51.330028] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:18.093 [2024-12-07 17:29:51.330119] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:13:18.093 [2024-12-07 17:29:51.330172] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:13:18.093 {
00:13:18.093 "results": [
00:13:18.093 {
00:13:18.093 "job": "raid_bdev1",
00:13:18.093 "core_mask": "0x1",
00:13:18.093 "workload": "randrw",
00:13:18.093 "percentage": 50,
00:13:18.093 "status": "finished",
00:13:18.093 "queue_depth": 2,
00:13:18.093 "io_size": 3145728,
00:13:18.093 "runtime": 7.710568,
00:13:18.093 "iops": 94.80494822171337,
00:13:18.093 "mibps": 284.4148446651401,
00:13:18.093 "io_failed": 0,
00:13:18.093 "io_timeout": 0,
00:13:18.093 "avg_latency_us": 13981.548295987432,
00:13:18.093 "min_latency_us": 314.80174672489085,
00:13:18.093 "max_latency_us": 114473.36244541485
00:13:18.093 }
00:13:18.093 ],
00:13:18.093 "core_count": 1
00:13:18.093 }
00:13:18.093 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:18.093 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:18.093 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length
00:13:18.093 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:18.093 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:18.093 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:18.093 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]]
00:13:18.093 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']'
00:13:18.093 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']'
00:13:18.093 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0
00:13:18.093 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:13:18.093 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare')
00:13:18.093 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list
00:13:18.093 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:13:18.093 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list
00:13:18.093 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i
00:13:18.093 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:13:18.093 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:13:18.093 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0
00:13:18.352 /dev/nbd0
00:13:18.352 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:13:18.352 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:13:18.352 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:13:18.352 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i
00:13:18.352 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:13:18.352 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:13:18.352 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:13:18.352 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break
00:13:18.352 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:13:18.352 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:13:18.352 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:13:18.352 1+0 records in
00:13:18.352 1+0 records out
00:13:18.352 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000541068 s, 7.6 MB/s
00:13:18.352 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:18.352 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096
00:13:18.352 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:18.352 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:13:18.353 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0
00:13:18.353 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:13:18.353 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:13:18.353 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}"
00:13:18.353 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']'
00:13:18.353 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1
00:13:18.353 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:13:18.353 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2')
00:13:18.353 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list
00:13:18.353 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1')
00:13:18.353 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list
00:13:18.353 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i
00:13:18.353 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:13:18.353 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:13:18.353 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1
00:13:18.613 /dev/nbd1
00:13:18.613 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:13:18.613 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:13:18.613 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:13:18.613 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i
00:13:18.613 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:13:18.613 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:13:18.613 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:13:18.613 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break
00:13:18.613 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:13:18.613 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:13:18.613 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:13:18.613 1+0 records in
00:13:18.613 1+0 records out
00:13:18.613 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000393529 s, 10.4 MB/s
00:13:18.613 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:18.613 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096
00:13:18.613 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:18.613 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:13:18.613 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0
00:13:18.613 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:13:18.613 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:13:18.613 17:29:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1
00:13:18.872 17:29:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1
00:13:18.872 17:29:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:13:18.872 17:29:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1')
00:13:18.872 17:29:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list
00:13:18.872 17:29:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i
00:13:18.872 17:29:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:13:18.872 17:29:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1
00:13:18.872 17:29:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:13:19.132 17:29:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:13:19.132 17:29:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:13:19.132 17:29:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:13:19.132 17:29:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:13:19.132 17:29:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:13:19.132 17:29:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break
00:13:19.132 17:29:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0
00:13:19.132 17:29:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:13:19.132 17:29:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:13:19.132 17:29:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:13:19.132 17:29:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list
00:13:19.132 17:29:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i
00:13:19.132 17:29:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:13:19.132 17:29:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:13:19.132 17:29:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:13:19.132 17:29:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:13:19.132 17:29:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:13:19.132 17:29:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:13:19.132 17:29:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:13:19.132 17:29:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:13:19.132 17:29:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break
00:13:19.132 17:29:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0
00:13:19.132 17:29:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']'
00:13:19.132 17:29:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare
00:13:19.132 17:29:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:19.132 17:29:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:19.132 17:29:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:19.132
17:29:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:19.132 17:29:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.132 17:29:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.132 [2024-12-07 17:29:52.504501] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:19.132 [2024-12-07 17:29:52.504555] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:19.132 [2024-12-07 17:29:52.504578] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:13:19.132 [2024-12-07 17:29:52.504587] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:19.132 [2024-12-07 17:29:52.506684] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:19.132 [2024-12-07 17:29:52.506726] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:19.132 [2024-12-07 17:29:52.506830] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:19.132 [2024-12-07 17:29:52.506886] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:19.132 [2024-12-07 17:29:52.507069] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:19.132 spare 00:13:19.132 17:29:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.132 17:29:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:19.132 17:29:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.132 17:29:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.391 [2024-12-07 17:29:52.606984] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007b00 00:13:19.391 [2024-12-07 17:29:52.607059] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:19.391 [2024-12-07 17:29:52.607399] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:13:19.391 [2024-12-07 17:29:52.607640] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:19.391 [2024-12-07 17:29:52.607694] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:19.391 [2024-12-07 17:29:52.607916] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:19.391 17:29:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.391 17:29:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:19.391 17:29:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:19.391 17:29:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:19.391 17:29:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:19.391 17:29:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:19.391 17:29:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:19.391 17:29:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.391 17:29:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.391 17:29:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.391 17:29:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.391 17:29:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:13:19.391 17:29:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.391 17:29:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.391 17:29:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.391 17:29:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.391 17:29:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.391 "name": "raid_bdev1", 00:13:19.391 "uuid": "9cba6776-2670-4b28-920d-9842f4d44d91", 00:13:19.391 "strip_size_kb": 0, 00:13:19.391 "state": "online", 00:13:19.391 "raid_level": "raid1", 00:13:19.391 "superblock": true, 00:13:19.391 "num_base_bdevs": 2, 00:13:19.391 "num_base_bdevs_discovered": 2, 00:13:19.391 "num_base_bdevs_operational": 2, 00:13:19.391 "base_bdevs_list": [ 00:13:19.391 { 00:13:19.391 "name": "spare", 00:13:19.391 "uuid": "72a86f2f-12f7-5c5b-82ee-494303c87418", 00:13:19.391 "is_configured": true, 00:13:19.391 "data_offset": 2048, 00:13:19.391 "data_size": 63488 00:13:19.391 }, 00:13:19.391 { 00:13:19.391 "name": "BaseBdev2", 00:13:19.391 "uuid": "76668694-a89a-5dd2-ab44-62261b111c31", 00:13:19.391 "is_configured": true, 00:13:19.391 "data_offset": 2048, 00:13:19.391 "data_size": 63488 00:13:19.391 } 00:13:19.391 ] 00:13:19.391 }' 00:13:19.391 17:29:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.391 17:29:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.959 17:29:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:19.959 17:29:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:19.959 17:29:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:13:19.959 17:29:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:19.959 17:29:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:19.959 17:29:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.959 17:29:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.959 17:29:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.959 17:29:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.959 17:29:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.959 17:29:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:19.959 "name": "raid_bdev1", 00:13:19.959 "uuid": "9cba6776-2670-4b28-920d-9842f4d44d91", 00:13:19.959 "strip_size_kb": 0, 00:13:19.959 "state": "online", 00:13:19.959 "raid_level": "raid1", 00:13:19.959 "superblock": true, 00:13:19.959 "num_base_bdevs": 2, 00:13:19.959 "num_base_bdevs_discovered": 2, 00:13:19.959 "num_base_bdevs_operational": 2, 00:13:19.959 "base_bdevs_list": [ 00:13:19.959 { 00:13:19.959 "name": "spare", 00:13:19.959 "uuid": "72a86f2f-12f7-5c5b-82ee-494303c87418", 00:13:19.959 "is_configured": true, 00:13:19.959 "data_offset": 2048, 00:13:19.959 "data_size": 63488 00:13:19.959 }, 00:13:19.959 { 00:13:19.959 "name": "BaseBdev2", 00:13:19.959 "uuid": "76668694-a89a-5dd2-ab44-62261b111c31", 00:13:19.959 "is_configured": true, 00:13:19.959 "data_offset": 2048, 00:13:19.959 "data_size": 63488 00:13:19.959 } 00:13:19.959 ] 00:13:19.959 }' 00:13:19.959 17:29:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:19.959 17:29:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:13:19.959 17:29:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:19.959 17:29:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:19.959 17:29:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:19.959 17:29:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.960 17:29:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.960 17:29:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.960 17:29:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.960 17:29:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:19.960 17:29:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:19.960 17:29:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.960 17:29:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.960 [2024-12-07 17:29:53.251434] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:19.960 17:29:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.960 17:29:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:19.960 17:29:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:19.960 17:29:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:19.960 17:29:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:19.960 17:29:53 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:19.960 17:29:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:19.960 17:29:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.960 17:29:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.960 17:29:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.960 17:29:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.960 17:29:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.960 17:29:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.960 17:29:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.960 17:29:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.960 17:29:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.960 17:29:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.960 "name": "raid_bdev1", 00:13:19.960 "uuid": "9cba6776-2670-4b28-920d-9842f4d44d91", 00:13:19.960 "strip_size_kb": 0, 00:13:19.960 "state": "online", 00:13:19.960 "raid_level": "raid1", 00:13:19.960 "superblock": true, 00:13:19.960 "num_base_bdevs": 2, 00:13:19.960 "num_base_bdevs_discovered": 1, 00:13:19.960 "num_base_bdevs_operational": 1, 00:13:19.960 "base_bdevs_list": [ 00:13:19.960 { 00:13:19.960 "name": null, 00:13:19.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.960 "is_configured": false, 00:13:19.960 "data_offset": 0, 00:13:19.960 "data_size": 63488 00:13:19.960 }, 00:13:19.960 { 00:13:19.960 "name": "BaseBdev2", 00:13:19.960 "uuid": "76668694-a89a-5dd2-ab44-62261b111c31", 00:13:19.960 
"is_configured": true, 00:13:19.960 "data_offset": 2048, 00:13:19.960 "data_size": 63488 00:13:19.960 } 00:13:19.960 ] 00:13:19.960 }' 00:13:19.960 17:29:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.960 17:29:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.528 17:29:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:20.528 17:29:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.528 17:29:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.528 [2024-12-07 17:29:53.667067] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:20.528 [2024-12-07 17:29:53.667339] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:20.528 [2024-12-07 17:29:53.667401] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:20.528 [2024-12-07 17:29:53.667469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:20.529 [2024-12-07 17:29:53.683664] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:13:20.529 17:29:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.529 17:29:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:20.529 [2024-12-07 17:29:53.685558] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:21.465 17:29:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:21.465 17:29:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:21.465 17:29:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:21.465 17:29:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:21.465 17:29:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:21.465 17:29:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.465 17:29:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.465 17:29:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.465 17:29:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.465 17:29:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.465 17:29:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:21.465 "name": "raid_bdev1", 00:13:21.465 "uuid": "9cba6776-2670-4b28-920d-9842f4d44d91", 00:13:21.465 "strip_size_kb": 0, 00:13:21.465 "state": "online", 
00:13:21.465 "raid_level": "raid1", 00:13:21.465 "superblock": true, 00:13:21.465 "num_base_bdevs": 2, 00:13:21.465 "num_base_bdevs_discovered": 2, 00:13:21.465 "num_base_bdevs_operational": 2, 00:13:21.465 "process": { 00:13:21.465 "type": "rebuild", 00:13:21.465 "target": "spare", 00:13:21.465 "progress": { 00:13:21.465 "blocks": 20480, 00:13:21.465 "percent": 32 00:13:21.465 } 00:13:21.465 }, 00:13:21.465 "base_bdevs_list": [ 00:13:21.465 { 00:13:21.465 "name": "spare", 00:13:21.465 "uuid": "72a86f2f-12f7-5c5b-82ee-494303c87418", 00:13:21.465 "is_configured": true, 00:13:21.465 "data_offset": 2048, 00:13:21.465 "data_size": 63488 00:13:21.465 }, 00:13:21.465 { 00:13:21.465 "name": "BaseBdev2", 00:13:21.465 "uuid": "76668694-a89a-5dd2-ab44-62261b111c31", 00:13:21.465 "is_configured": true, 00:13:21.465 "data_offset": 2048, 00:13:21.465 "data_size": 63488 00:13:21.465 } 00:13:21.465 ] 00:13:21.465 }' 00:13:21.465 17:29:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:21.465 17:29:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:21.465 17:29:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:21.465 17:29:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:21.465 17:29:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:21.465 17:29:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.465 17:29:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.465 [2024-12-07 17:29:54.825610] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:21.723 [2024-12-07 17:29:54.891150] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:21.723 [2024-12-07 
17:29:54.891267] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:21.723 [2024-12-07 17:29:54.891332] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:21.723 [2024-12-07 17:29:54.891354] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:21.723 17:29:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.723 17:29:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:21.724 17:29:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:21.724 17:29:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:21.724 17:29:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:21.724 17:29:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:21.724 17:29:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:21.724 17:29:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.724 17:29:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.724 17:29:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.724 17:29:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.724 17:29:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.724 17:29:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.724 17:29:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.724 17:29:54 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:13:21.724 17:29:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.724 17:29:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.724 "name": "raid_bdev1", 00:13:21.724 "uuid": "9cba6776-2670-4b28-920d-9842f4d44d91", 00:13:21.724 "strip_size_kb": 0, 00:13:21.724 "state": "online", 00:13:21.724 "raid_level": "raid1", 00:13:21.724 "superblock": true, 00:13:21.724 "num_base_bdevs": 2, 00:13:21.724 "num_base_bdevs_discovered": 1, 00:13:21.724 "num_base_bdevs_operational": 1, 00:13:21.724 "base_bdevs_list": [ 00:13:21.724 { 00:13:21.724 "name": null, 00:13:21.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.724 "is_configured": false, 00:13:21.724 "data_offset": 0, 00:13:21.724 "data_size": 63488 00:13:21.724 }, 00:13:21.724 { 00:13:21.724 "name": "BaseBdev2", 00:13:21.724 "uuid": "76668694-a89a-5dd2-ab44-62261b111c31", 00:13:21.724 "is_configured": true, 00:13:21.724 "data_offset": 2048, 00:13:21.724 "data_size": 63488 00:13:21.724 } 00:13:21.724 ] 00:13:21.724 }' 00:13:21.724 17:29:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.724 17:29:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.981 17:29:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:21.981 17:29:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.981 17:29:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.981 [2024-12-07 17:29:55.356645] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:21.981 [2024-12-07 17:29:55.356755] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:21.981 [2024-12-07 17:29:55.356798] vbdev_passthru.c: 682:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000ae80 00:13:21.981 [2024-12-07 17:29:55.356826] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:21.981 [2024-12-07 17:29:55.357337] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:21.981 [2024-12-07 17:29:55.357397] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:21.981 [2024-12-07 17:29:55.357526] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:21.981 [2024-12-07 17:29:55.357567] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:21.981 [2024-12-07 17:29:55.357609] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:21.981 [2024-12-07 17:29:55.357682] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:22.239 [2024-12-07 17:29:55.373841] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:13:22.239 spare 00:13:22.239 17:29:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.239 17:29:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:22.239 [2024-12-07 17:29:55.375729] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:23.177 17:29:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:23.177 17:29:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:23.177 17:29:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:23.177 17:29:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:23.177 17:29:56 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:23.177 17:29:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.177 17:29:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.177 17:29:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.177 17:29:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.177 17:29:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.177 17:29:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:23.177 "name": "raid_bdev1", 00:13:23.177 "uuid": "9cba6776-2670-4b28-920d-9842f4d44d91", 00:13:23.177 "strip_size_kb": 0, 00:13:23.177 "state": "online", 00:13:23.177 "raid_level": "raid1", 00:13:23.177 "superblock": true, 00:13:23.177 "num_base_bdevs": 2, 00:13:23.177 "num_base_bdevs_discovered": 2, 00:13:23.177 "num_base_bdevs_operational": 2, 00:13:23.177 "process": { 00:13:23.177 "type": "rebuild", 00:13:23.177 "target": "spare", 00:13:23.177 "progress": { 00:13:23.177 "blocks": 20480, 00:13:23.177 "percent": 32 00:13:23.177 } 00:13:23.177 }, 00:13:23.177 "base_bdevs_list": [ 00:13:23.177 { 00:13:23.177 "name": "spare", 00:13:23.177 "uuid": "72a86f2f-12f7-5c5b-82ee-494303c87418", 00:13:23.177 "is_configured": true, 00:13:23.177 "data_offset": 2048, 00:13:23.177 "data_size": 63488 00:13:23.177 }, 00:13:23.177 { 00:13:23.177 "name": "BaseBdev2", 00:13:23.177 "uuid": "76668694-a89a-5dd2-ab44-62261b111c31", 00:13:23.177 "is_configured": true, 00:13:23.177 "data_offset": 2048, 00:13:23.177 "data_size": 63488 00:13:23.177 } 00:13:23.177 ] 00:13:23.177 }' 00:13:23.177 17:29:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:23.177 17:29:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:13:23.177 17:29:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:23.177 17:29:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:23.177 17:29:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:23.177 17:29:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.177 17:29:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.177 [2024-12-07 17:29:56.535517] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:23.436 [2024-12-07 17:29:56.581083] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:23.436 [2024-12-07 17:29:56.581210] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:23.436 [2024-12-07 17:29:56.581257] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:23.436 [2024-12-07 17:29:56.581300] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:23.436 17:29:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.436 17:29:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:23.436 17:29:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:23.436 17:29:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:23.436 17:29:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:23.436 17:29:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:23.436 17:29:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:13:23.436 17:29:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.436 17:29:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.436 17:29:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.436 17:29:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.436 17:29:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.436 17:29:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.436 17:29:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.436 17:29:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.436 17:29:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.436 17:29:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.436 "name": "raid_bdev1", 00:13:23.436 "uuid": "9cba6776-2670-4b28-920d-9842f4d44d91", 00:13:23.436 "strip_size_kb": 0, 00:13:23.436 "state": "online", 00:13:23.436 "raid_level": "raid1", 00:13:23.436 "superblock": true, 00:13:23.436 "num_base_bdevs": 2, 00:13:23.436 "num_base_bdevs_discovered": 1, 00:13:23.436 "num_base_bdevs_operational": 1, 00:13:23.436 "base_bdevs_list": [ 00:13:23.436 { 00:13:23.436 "name": null, 00:13:23.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.436 "is_configured": false, 00:13:23.436 "data_offset": 0, 00:13:23.436 "data_size": 63488 00:13:23.436 }, 00:13:23.436 { 00:13:23.436 "name": "BaseBdev2", 00:13:23.436 "uuid": "76668694-a89a-5dd2-ab44-62261b111c31", 00:13:23.436 "is_configured": true, 00:13:23.436 "data_offset": 2048, 00:13:23.436 "data_size": 63488 00:13:23.436 } 00:13:23.436 ] 00:13:23.436 }' 
00:13:23.436 17:29:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.436 17:29:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.696 17:29:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:23.696 17:29:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:23.696 17:29:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:23.696 17:29:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:23.696 17:29:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:23.696 17:29:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.696 17:29:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.696 17:29:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.696 17:29:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.696 17:29:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.955 17:29:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:23.955 "name": "raid_bdev1", 00:13:23.955 "uuid": "9cba6776-2670-4b28-920d-9842f4d44d91", 00:13:23.955 "strip_size_kb": 0, 00:13:23.955 "state": "online", 00:13:23.955 "raid_level": "raid1", 00:13:23.955 "superblock": true, 00:13:23.955 "num_base_bdevs": 2, 00:13:23.955 "num_base_bdevs_discovered": 1, 00:13:23.955 "num_base_bdevs_operational": 1, 00:13:23.955 "base_bdevs_list": [ 00:13:23.956 { 00:13:23.956 "name": null, 00:13:23.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.956 "is_configured": false, 00:13:23.956 "data_offset": 0, 
00:13:23.956 "data_size": 63488 00:13:23.956 }, 00:13:23.956 { 00:13:23.956 "name": "BaseBdev2", 00:13:23.956 "uuid": "76668694-a89a-5dd2-ab44-62261b111c31", 00:13:23.956 "is_configured": true, 00:13:23.956 "data_offset": 2048, 00:13:23.956 "data_size": 63488 00:13:23.956 } 00:13:23.956 ] 00:13:23.956 }' 00:13:23.956 17:29:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:23.956 17:29:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:23.956 17:29:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:23.956 17:29:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:23.956 17:29:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:23.956 17:29:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.956 17:29:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.956 17:29:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.956 17:29:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:23.956 17:29:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.956 17:29:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.956 [2024-12-07 17:29:57.202627] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:23.956 [2024-12-07 17:29:57.202735] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:23.956 [2024-12-07 17:29:57.202780] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:23.956 [2024-12-07 17:29:57.202829] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:23.956 [2024-12-07 17:29:57.203338] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:23.956 [2024-12-07 17:29:57.203400] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:23.956 [2024-12-07 17:29:57.203492] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:23.956 [2024-12-07 17:29:57.203512] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:23.956 [2024-12-07 17:29:57.203520] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:23.956 [2024-12-07 17:29:57.203538] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:23.956 BaseBdev1 00:13:23.956 17:29:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.956 17:29:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:24.910 17:29:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:24.910 17:29:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:24.910 17:29:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:24.910 17:29:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:24.910 17:29:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:24.910 17:29:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:24.910 17:29:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.910 17:29:58 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.910 17:29:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.910 17:29:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.910 17:29:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.910 17:29:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.910 17:29:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.910 17:29:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:24.910 17:29:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.910 17:29:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.910 "name": "raid_bdev1", 00:13:24.910 "uuid": "9cba6776-2670-4b28-920d-9842f4d44d91", 00:13:24.910 "strip_size_kb": 0, 00:13:24.910 "state": "online", 00:13:24.910 "raid_level": "raid1", 00:13:24.910 "superblock": true, 00:13:24.910 "num_base_bdevs": 2, 00:13:24.910 "num_base_bdevs_discovered": 1, 00:13:24.910 "num_base_bdevs_operational": 1, 00:13:24.910 "base_bdevs_list": [ 00:13:24.910 { 00:13:24.910 "name": null, 00:13:24.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.910 "is_configured": false, 00:13:24.910 "data_offset": 0, 00:13:24.910 "data_size": 63488 00:13:24.910 }, 00:13:24.910 { 00:13:24.910 "name": "BaseBdev2", 00:13:24.910 "uuid": "76668694-a89a-5dd2-ab44-62261b111c31", 00:13:24.910 "is_configured": true, 00:13:24.910 "data_offset": 2048, 00:13:24.910 "data_size": 63488 00:13:24.910 } 00:13:24.910 ] 00:13:24.910 }' 00:13:24.910 17:29:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.910 17:29:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:13:25.495 17:29:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:25.495 17:29:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:25.495 17:29:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:25.495 17:29:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:25.495 17:29:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:25.495 17:29:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.495 17:29:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.495 17:29:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.495 17:29:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.495 17:29:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.495 17:29:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:25.495 "name": "raid_bdev1", 00:13:25.495 "uuid": "9cba6776-2670-4b28-920d-9842f4d44d91", 00:13:25.495 "strip_size_kb": 0, 00:13:25.495 "state": "online", 00:13:25.495 "raid_level": "raid1", 00:13:25.495 "superblock": true, 00:13:25.495 "num_base_bdevs": 2, 00:13:25.495 "num_base_bdevs_discovered": 1, 00:13:25.495 "num_base_bdevs_operational": 1, 00:13:25.495 "base_bdevs_list": [ 00:13:25.495 { 00:13:25.495 "name": null, 00:13:25.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.495 "is_configured": false, 00:13:25.495 "data_offset": 0, 00:13:25.495 "data_size": 63488 00:13:25.495 }, 00:13:25.495 { 00:13:25.495 "name": "BaseBdev2", 00:13:25.495 "uuid": "76668694-a89a-5dd2-ab44-62261b111c31", 00:13:25.495 "is_configured": true, 
00:13:25.495 "data_offset": 2048, 00:13:25.495 "data_size": 63488 00:13:25.495 } 00:13:25.495 ] 00:13:25.495 }' 00:13:25.495 17:29:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:25.495 17:29:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:25.495 17:29:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:25.495 17:29:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:25.495 17:29:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:25.495 17:29:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:13:25.495 17:29:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:25.495 17:29:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:25.495 17:29:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:25.495 17:29:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:25.495 17:29:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:25.495 17:29:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:25.495 17:29:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.495 17:29:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.495 [2024-12-07 17:29:58.744375] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:25.495 [2024-12-07 17:29:58.744620] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:25.495 [2024-12-07 17:29:58.744678] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:25.495 request: 00:13:25.495 { 00:13:25.495 "base_bdev": "BaseBdev1", 00:13:25.495 "raid_bdev": "raid_bdev1", 00:13:25.495 "method": "bdev_raid_add_base_bdev", 00:13:25.495 "req_id": 1 00:13:25.495 } 00:13:25.495 Got JSON-RPC error response 00:13:25.495 response: 00:13:25.495 { 00:13:25.495 "code": -22, 00:13:25.495 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:25.495 } 00:13:25.495 17:29:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:25.495 17:29:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:13:25.495 17:29:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:25.495 17:29:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:25.495 17:29:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:25.495 17:29:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:26.435 17:29:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:26.435 17:29:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:26.435 17:29:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:26.435 17:29:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:26.435 17:29:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:26.435 17:29:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:13:26.435 17:29:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.435 17:29:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.435 17:29:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.435 17:29:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.435 17:29:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.436 17:29:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.436 17:29:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.436 17:29:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.436 17:29:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.436 17:29:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.436 "name": "raid_bdev1", 00:13:26.436 "uuid": "9cba6776-2670-4b28-920d-9842f4d44d91", 00:13:26.436 "strip_size_kb": 0, 00:13:26.436 "state": "online", 00:13:26.436 "raid_level": "raid1", 00:13:26.436 "superblock": true, 00:13:26.436 "num_base_bdevs": 2, 00:13:26.436 "num_base_bdevs_discovered": 1, 00:13:26.436 "num_base_bdevs_operational": 1, 00:13:26.436 "base_bdevs_list": [ 00:13:26.436 { 00:13:26.436 "name": null, 00:13:26.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.436 "is_configured": false, 00:13:26.436 "data_offset": 0, 00:13:26.436 "data_size": 63488 00:13:26.436 }, 00:13:26.436 { 00:13:26.436 "name": "BaseBdev2", 00:13:26.436 "uuid": "76668694-a89a-5dd2-ab44-62261b111c31", 00:13:26.436 "is_configured": true, 00:13:26.436 "data_offset": 2048, 00:13:26.436 "data_size": 63488 00:13:26.436 } 00:13:26.436 ] 00:13:26.436 }' 
00:13:26.436 17:29:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.436 17:29:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.001 17:30:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:27.001 17:30:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:27.001 17:30:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:27.001 17:30:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:27.001 17:30:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:27.001 17:30:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.001 17:30:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.001 17:30:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.001 17:30:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.001 17:30:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.001 17:30:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:27.001 "name": "raid_bdev1", 00:13:27.001 "uuid": "9cba6776-2670-4b28-920d-9842f4d44d91", 00:13:27.001 "strip_size_kb": 0, 00:13:27.001 "state": "online", 00:13:27.001 "raid_level": "raid1", 00:13:27.001 "superblock": true, 00:13:27.001 "num_base_bdevs": 2, 00:13:27.001 "num_base_bdevs_discovered": 1, 00:13:27.001 "num_base_bdevs_operational": 1, 00:13:27.001 "base_bdevs_list": [ 00:13:27.001 { 00:13:27.001 "name": null, 00:13:27.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.001 "is_configured": false, 00:13:27.001 "data_offset": 0, 
00:13:27.001 "data_size": 63488 00:13:27.001 }, 00:13:27.001 { 00:13:27.001 "name": "BaseBdev2", 00:13:27.001 "uuid": "76668694-a89a-5dd2-ab44-62261b111c31", 00:13:27.001 "is_configured": true, 00:13:27.001 "data_offset": 2048, 00:13:27.001 "data_size": 63488 00:13:27.001 } 00:13:27.001 ] 00:13:27.001 }' 00:13:27.001 17:30:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:27.001 17:30:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:27.001 17:30:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:27.001 17:30:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:27.001 17:30:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 76858 00:13:27.001 17:30:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 76858 ']' 00:13:27.001 17:30:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 76858 00:13:27.001 17:30:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:13:27.001 17:30:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:27.001 17:30:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76858 00:13:27.001 17:30:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:27.001 17:30:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:27.259 killing process with pid 76858 00:13:27.259 Received shutdown signal, test time was about 16.800512 seconds 00:13:27.259 00:13:27.259 Latency(us) 00:13:27.259 [2024-12-07T17:30:00.641Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:27.259 [2024-12-07T17:30:00.641Z] 
=================================================================================================================== 00:13:27.259 [2024-12-07T17:30:00.641Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:27.259 17:30:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76858' 00:13:27.260 17:30:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 76858 00:13:27.260 [2024-12-07 17:30:00.381536] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:27.260 17:30:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 76858 00:13:27.260 [2024-12-07 17:30:00.381676] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:27.260 [2024-12-07 17:30:00.381751] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:27.260 [2024-12-07 17:30:00.381761] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:27.260 [2024-12-07 17:30:00.626451] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:28.636 17:30:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:28.636 ************************************ 00:13:28.636 END TEST raid_rebuild_test_sb_io 00:13:28.636 ************************************ 00:13:28.636 00:13:28.636 real 0m19.978s 00:13:28.636 user 0m25.945s 00:13:28.636 sys 0m2.228s 00:13:28.636 17:30:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:28.636 17:30:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.636 17:30:01 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:13:28.636 17:30:01 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:13:28.636 17:30:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 
7 -le 1 ']' 00:13:28.636 17:30:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:28.636 17:30:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:28.636 ************************************ 00:13:28.636 START TEST raid_rebuild_test 00:13:28.636 ************************************ 00:13:28.636 17:30:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:13:28.636 17:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:28.636 17:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:28.636 17:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:28.636 17:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:28.636 17:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:28.636 17:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:28.636 17:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:28.636 17:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:28.636 17:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:28.636 17:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:28.636 17:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:28.636 17:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:28.636 17:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:28.636 17:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:28.636 17:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:28.636 17:30:01 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:28.636 17:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:28.636 17:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:28.636 17:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:28.636 17:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:28.636 17:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:28.636 17:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:28.636 17:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:28.636 17:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:28.636 17:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:28.636 17:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:28.636 17:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:28.636 17:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:28.636 17:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:28.636 17:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77542 00:13:28.636 17:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:28.636 17:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77542 00:13:28.636 17:30:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 77542 ']' 00:13:28.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:28.636 17:30:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:28.636 17:30:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:28.636 17:30:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:28.636 17:30:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:28.636 17:30:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.636 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:28.636 Zero copy mechanism will not be used. 00:13:28.636 [2024-12-07 17:30:01.964746] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:13:28.636 [2024-12-07 17:30:01.964867] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77542 ] 00:13:28.894 [2024-12-07 17:30:02.136764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:28.894 [2024-12-07 17:30:02.251160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.152 [2024-12-07 17:30:02.461082] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:29.152 [2024-12-07 17:30:02.461141] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:29.718 17:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:29.718 17:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:13:29.718 17:30:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:29.718 17:30:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # 
rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:29.718 17:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.718 17:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.718 BaseBdev1_malloc 00:13:29.718 17:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.718 17:30:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:29.718 17:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.718 17:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.718 [2024-12-07 17:30:02.854735] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:29.718 [2024-12-07 17:30:02.854848] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:29.718 [2024-12-07 17:30:02.854888] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:29.718 [2024-12-07 17:30:02.854919] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:29.718 [2024-12-07 17:30:02.857169] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:29.718 [2024-12-07 17:30:02.857248] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:29.718 BaseBdev1 00:13:29.718 17:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.718 17:30:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:29.718 17:30:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:29.718 17:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.718 17:30:02 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:29.718 BaseBdev2_malloc 00:13:29.718 17:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.718 17:30:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:29.718 17:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.718 17:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.718 [2024-12-07 17:30:02.910161] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:29.718 [2024-12-07 17:30:02.910225] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:29.718 [2024-12-07 17:30:02.910248] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:29.718 [2024-12-07 17:30:02.910259] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:29.718 [2024-12-07 17:30:02.912424] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:29.718 [2024-12-07 17:30:02.912469] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:29.718 BaseBdev2 00:13:29.718 17:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.718 17:30:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:29.718 17:30:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:29.718 17:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.718 17:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.718 BaseBdev3_malloc 00:13:29.718 17:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.718 17:30:02 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:29.718 17:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.718 17:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.718 [2024-12-07 17:30:02.979599] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:29.718 [2024-12-07 17:30:02.979733] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:29.718 [2024-12-07 17:30:02.979779] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:29.718 [2024-12-07 17:30:02.979821] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:29.718 [2024-12-07 17:30:02.981991] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:29.718 [2024-12-07 17:30:02.982065] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:29.718 BaseBdev3 00:13:29.718 17:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.718 17:30:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:29.718 17:30:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:29.718 17:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.718 17:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.718 BaseBdev4_malloc 00:13:29.718 17:30:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.718 17:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:29.718 17:30:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.718 17:30:03 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.718 [2024-12-07 17:30:03.035735] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:29.718 [2024-12-07 17:30:03.035800] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:29.718 [2024-12-07 17:30:03.035821] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:29.718 [2024-12-07 17:30:03.035832] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:29.718 [2024-12-07 17:30:03.037979] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:29.718 BaseBdev4 00:13:29.718 [2024-12-07 17:30:03.038097] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:29.718 17:30:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.718 17:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:29.718 17:30:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.718 17:30:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.718 spare_malloc 00:13:29.718 17:30:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.718 17:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:29.718 17:30:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.718 17:30:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.718 spare_delay 00:13:29.718 17:30:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.977 17:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b 
spare_delay -p spare 00:13:29.977 17:30:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.977 17:30:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.977 [2024-12-07 17:30:03.104826] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:29.977 [2024-12-07 17:30:03.104926] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:29.977 [2024-12-07 17:30:03.104974] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:29.977 [2024-12-07 17:30:03.105007] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:29.977 [2024-12-07 17:30:03.107092] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:29.977 [2024-12-07 17:30:03.107168] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:29.977 spare 00:13:29.977 17:30:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.977 17:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:29.977 17:30:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.977 17:30:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.977 [2024-12-07 17:30:03.116862] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:29.977 [2024-12-07 17:30:03.118871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:29.977 [2024-12-07 17:30:03.118993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:29.977 [2024-12-07 17:30:03.119074] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:29.977 [2024-12-07 17:30:03.119199] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:29.977 [2024-12-07 17:30:03.119253] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:29.977 [2024-12-07 17:30:03.119564] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:29.977 [2024-12-07 17:30:03.119800] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:29.977 [2024-12-07 17:30:03.119851] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:29.977 [2024-12-07 17:30:03.120082] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:29.977 17:30:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.977 17:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:29.977 17:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:29.977 17:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:29.977 17:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:29.977 17:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:29.977 17:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:29.977 17:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.977 17:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.977 17:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.977 17:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.977 17:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:29.977 17:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.977 17:30:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.977 17:30:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.977 17:30:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.977 17:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.977 "name": "raid_bdev1", 00:13:29.977 "uuid": "274d10b1-2af3-4e0b-9255-19c7df2bc84b", 00:13:29.977 "strip_size_kb": 0, 00:13:29.977 "state": "online", 00:13:29.977 "raid_level": "raid1", 00:13:29.977 "superblock": false, 00:13:29.977 "num_base_bdevs": 4, 00:13:29.977 "num_base_bdevs_discovered": 4, 00:13:29.977 "num_base_bdevs_operational": 4, 00:13:29.977 "base_bdevs_list": [ 00:13:29.977 { 00:13:29.977 "name": "BaseBdev1", 00:13:29.977 "uuid": "75bac4c8-aa6a-52ae-b3d1-12678f8e0171", 00:13:29.977 "is_configured": true, 00:13:29.977 "data_offset": 0, 00:13:29.977 "data_size": 65536 00:13:29.977 }, 00:13:29.977 { 00:13:29.977 "name": "BaseBdev2", 00:13:29.977 "uuid": "0c4ab6ee-3f28-542e-b6bc-2d5e79d4fb8a", 00:13:29.977 "is_configured": true, 00:13:29.977 "data_offset": 0, 00:13:29.977 "data_size": 65536 00:13:29.977 }, 00:13:29.977 { 00:13:29.977 "name": "BaseBdev3", 00:13:29.977 "uuid": "6208aa3e-70f6-5e2b-8f63-e99bf1fa8971", 00:13:29.977 "is_configured": true, 00:13:29.977 "data_offset": 0, 00:13:29.977 "data_size": 65536 00:13:29.977 }, 00:13:29.977 { 00:13:29.977 "name": "BaseBdev4", 00:13:29.977 "uuid": "61fb71a3-a54c-5cc0-87c5-6b1ab37ced1b", 00:13:29.977 "is_configured": true, 00:13:29.977 "data_offset": 0, 00:13:29.977 "data_size": 65536 00:13:29.977 } 00:13:29.977 ] 00:13:29.977 }' 00:13:29.977 17:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.977 17:30:03 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.235 17:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:30.235 17:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:30.235 17:30:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.235 17:30:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.235 [2024-12-07 17:30:03.560483] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:30.235 17:30:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.235 17:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:30.235 17:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:30.235 17:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.235 17:30:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.235 17:30:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.498 17:30:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.498 17:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:30.498 17:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:30.498 17:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:30.498 17:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:30.498 17:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:30.498 17:30:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 
00:13:30.498 17:30:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:30.498 17:30:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:30.498 17:30:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:30.498 17:30:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:30.498 17:30:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:30.498 17:30:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:30.498 17:30:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:30.498 17:30:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:30.498 [2024-12-07 17:30:03.835693] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:30.498 /dev/nbd0 00:13:30.498 17:30:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:30.498 17:30:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:30.498 17:30:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:30.498 17:30:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:30.498 17:30:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:30.498 17:30:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:30.498 17:30:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:30.498 17:30:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:30.498 17:30:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:30.498 17:30:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 
-- # (( i <= 20 )) 00:13:30.498 17:30:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:30.756 1+0 records in 00:13:30.756 1+0 records out 00:13:30.756 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000372867 s, 11.0 MB/s 00:13:30.756 17:30:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:30.756 17:30:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:30.756 17:30:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:30.756 17:30:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:30.756 17:30:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:30.756 17:30:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:30.756 17:30:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:30.756 17:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:30.756 17:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:30.756 17:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:13:37.315 65536+0 records in 00:13:37.315 65536+0 records out 00:13:37.315 33554432 bytes (34 MB, 32 MiB) copied, 6.0035 s, 5.6 MB/s 00:13:37.315 17:30:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:37.315 17:30:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:37.315 17:30:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:37.315 17:30:09 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:13:37.315 17:30:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:37.315 17:30:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:37.315 17:30:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:37.315 17:30:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:37.315 [2024-12-07 17:30:10.115897] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:37.315 17:30:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:37.315 17:30:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:37.315 17:30:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:37.315 17:30:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:37.315 17:30:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:37.315 17:30:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:37.315 17:30:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:37.315 17:30:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:37.315 17:30:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.315 17:30:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.315 [2024-12-07 17:30:10.127999] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:37.315 17:30:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.315 17:30:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:37.315 17:30:10 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:37.315 17:30:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:37.315 17:30:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:37.315 17:30:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:37.315 17:30:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:37.315 17:30:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.315 17:30:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.315 17:30:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.315 17:30:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.315 17:30:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.315 17:30:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.315 17:30:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.315 17:30:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.315 17:30:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.315 17:30:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.315 "name": "raid_bdev1", 00:13:37.315 "uuid": "274d10b1-2af3-4e0b-9255-19c7df2bc84b", 00:13:37.315 "strip_size_kb": 0, 00:13:37.315 "state": "online", 00:13:37.315 "raid_level": "raid1", 00:13:37.315 "superblock": false, 00:13:37.315 "num_base_bdevs": 4, 00:13:37.315 "num_base_bdevs_discovered": 3, 00:13:37.315 "num_base_bdevs_operational": 3, 00:13:37.315 "base_bdevs_list": [ 00:13:37.315 { 00:13:37.315 "name": null, 00:13:37.315 
"uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.315 "is_configured": false, 00:13:37.315 "data_offset": 0, 00:13:37.315 "data_size": 65536 00:13:37.315 }, 00:13:37.315 { 00:13:37.315 "name": "BaseBdev2", 00:13:37.315 "uuid": "0c4ab6ee-3f28-542e-b6bc-2d5e79d4fb8a", 00:13:37.315 "is_configured": true, 00:13:37.315 "data_offset": 0, 00:13:37.315 "data_size": 65536 00:13:37.315 }, 00:13:37.315 { 00:13:37.315 "name": "BaseBdev3", 00:13:37.315 "uuid": "6208aa3e-70f6-5e2b-8f63-e99bf1fa8971", 00:13:37.315 "is_configured": true, 00:13:37.315 "data_offset": 0, 00:13:37.315 "data_size": 65536 00:13:37.315 }, 00:13:37.315 { 00:13:37.315 "name": "BaseBdev4", 00:13:37.315 "uuid": "61fb71a3-a54c-5cc0-87c5-6b1ab37ced1b", 00:13:37.315 "is_configured": true, 00:13:37.315 "data_offset": 0, 00:13:37.316 "data_size": 65536 00:13:37.316 } 00:13:37.316 ] 00:13:37.316 }' 00:13:37.316 17:30:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.316 17:30:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.316 17:30:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:37.316 17:30:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.316 17:30:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.316 [2024-12-07 17:30:10.535326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:37.316 [2024-12-07 17:30:10.551153] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:13:37.316 17:30:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.316 17:30:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:37.316 [2024-12-07 17:30:10.553094] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:38.253 17:30:11 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:38.253 17:30:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:38.253 17:30:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:38.253 17:30:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:38.253 17:30:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:38.253 17:30:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.253 17:30:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.253 17:30:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.253 17:30:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.253 17:30:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.253 17:30:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:38.253 "name": "raid_bdev1", 00:13:38.253 "uuid": "274d10b1-2af3-4e0b-9255-19c7df2bc84b", 00:13:38.253 "strip_size_kb": 0, 00:13:38.253 "state": "online", 00:13:38.253 "raid_level": "raid1", 00:13:38.253 "superblock": false, 00:13:38.253 "num_base_bdevs": 4, 00:13:38.253 "num_base_bdevs_discovered": 4, 00:13:38.253 "num_base_bdevs_operational": 4, 00:13:38.253 "process": { 00:13:38.253 "type": "rebuild", 00:13:38.253 "target": "spare", 00:13:38.253 "progress": { 00:13:38.253 "blocks": 20480, 00:13:38.253 "percent": 31 00:13:38.253 } 00:13:38.253 }, 00:13:38.253 "base_bdevs_list": [ 00:13:38.253 { 00:13:38.253 "name": "spare", 00:13:38.253 "uuid": "bbe25be5-18f7-5864-af9c-25f1a16aa769", 00:13:38.253 "is_configured": true, 00:13:38.253 "data_offset": 0, 00:13:38.253 "data_size": 65536 00:13:38.253 }, 00:13:38.253 { 
00:13:38.253 "name": "BaseBdev2", 00:13:38.254 "uuid": "0c4ab6ee-3f28-542e-b6bc-2d5e79d4fb8a", 00:13:38.254 "is_configured": true, 00:13:38.254 "data_offset": 0, 00:13:38.254 "data_size": 65536 00:13:38.254 }, 00:13:38.254 { 00:13:38.254 "name": "BaseBdev3", 00:13:38.254 "uuid": "6208aa3e-70f6-5e2b-8f63-e99bf1fa8971", 00:13:38.254 "is_configured": true, 00:13:38.254 "data_offset": 0, 00:13:38.254 "data_size": 65536 00:13:38.254 }, 00:13:38.254 { 00:13:38.254 "name": "BaseBdev4", 00:13:38.254 "uuid": "61fb71a3-a54c-5cc0-87c5-6b1ab37ced1b", 00:13:38.254 "is_configured": true, 00:13:38.254 "data_offset": 0, 00:13:38.254 "data_size": 65536 00:13:38.254 } 00:13:38.254 ] 00:13:38.254 }' 00:13:38.254 17:30:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:38.513 17:30:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:38.513 17:30:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:38.513 17:30:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:38.513 17:30:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:38.513 17:30:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.513 17:30:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.513 [2024-12-07 17:30:11.720306] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:38.513 [2024-12-07 17:30:11.758548] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:38.513 [2024-12-07 17:30:11.758678] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:38.513 [2024-12-07 17:30:11.758717] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:38.513 [2024-12-07 17:30:11.758742] 
bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:38.513 17:30:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.513 17:30:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:38.513 17:30:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:38.513 17:30:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:38.513 17:30:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:38.513 17:30:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:38.513 17:30:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:38.513 17:30:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.513 17:30:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.513 17:30:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.513 17:30:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.513 17:30:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.513 17:30:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.513 17:30:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.513 17:30:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.513 17:30:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.513 17:30:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.513 "name": "raid_bdev1", 00:13:38.513 "uuid": 
"274d10b1-2af3-4e0b-9255-19c7df2bc84b", 00:13:38.513 "strip_size_kb": 0, 00:13:38.513 "state": "online", 00:13:38.513 "raid_level": "raid1", 00:13:38.513 "superblock": false, 00:13:38.513 "num_base_bdevs": 4, 00:13:38.513 "num_base_bdevs_discovered": 3, 00:13:38.513 "num_base_bdevs_operational": 3, 00:13:38.514 "base_bdevs_list": [ 00:13:38.514 { 00:13:38.514 "name": null, 00:13:38.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.514 "is_configured": false, 00:13:38.514 "data_offset": 0, 00:13:38.514 "data_size": 65536 00:13:38.514 }, 00:13:38.514 { 00:13:38.514 "name": "BaseBdev2", 00:13:38.514 "uuid": "0c4ab6ee-3f28-542e-b6bc-2d5e79d4fb8a", 00:13:38.514 "is_configured": true, 00:13:38.514 "data_offset": 0, 00:13:38.514 "data_size": 65536 00:13:38.514 }, 00:13:38.514 { 00:13:38.514 "name": "BaseBdev3", 00:13:38.514 "uuid": "6208aa3e-70f6-5e2b-8f63-e99bf1fa8971", 00:13:38.514 "is_configured": true, 00:13:38.514 "data_offset": 0, 00:13:38.514 "data_size": 65536 00:13:38.514 }, 00:13:38.514 { 00:13:38.514 "name": "BaseBdev4", 00:13:38.514 "uuid": "61fb71a3-a54c-5cc0-87c5-6b1ab37ced1b", 00:13:38.514 "is_configured": true, 00:13:38.514 "data_offset": 0, 00:13:38.514 "data_size": 65536 00:13:38.514 } 00:13:38.514 ] 00:13:38.514 }' 00:13:38.514 17:30:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.514 17:30:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.083 17:30:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:39.083 17:30:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:39.083 17:30:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:39.083 17:30:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:39.083 17:30:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:13:39.083 17:30:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.083 17:30:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.083 17:30:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.083 17:30:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.083 17:30:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.083 17:30:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:39.083 "name": "raid_bdev1", 00:13:39.083 "uuid": "274d10b1-2af3-4e0b-9255-19c7df2bc84b", 00:13:39.083 "strip_size_kb": 0, 00:13:39.083 "state": "online", 00:13:39.083 "raid_level": "raid1", 00:13:39.083 "superblock": false, 00:13:39.083 "num_base_bdevs": 4, 00:13:39.083 "num_base_bdevs_discovered": 3, 00:13:39.083 "num_base_bdevs_operational": 3, 00:13:39.083 "base_bdevs_list": [ 00:13:39.083 { 00:13:39.084 "name": null, 00:13:39.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.084 "is_configured": false, 00:13:39.084 "data_offset": 0, 00:13:39.084 "data_size": 65536 00:13:39.084 }, 00:13:39.084 { 00:13:39.084 "name": "BaseBdev2", 00:13:39.084 "uuid": "0c4ab6ee-3f28-542e-b6bc-2d5e79d4fb8a", 00:13:39.084 "is_configured": true, 00:13:39.084 "data_offset": 0, 00:13:39.084 "data_size": 65536 00:13:39.084 }, 00:13:39.084 { 00:13:39.084 "name": "BaseBdev3", 00:13:39.084 "uuid": "6208aa3e-70f6-5e2b-8f63-e99bf1fa8971", 00:13:39.084 "is_configured": true, 00:13:39.084 "data_offset": 0, 00:13:39.084 "data_size": 65536 00:13:39.084 }, 00:13:39.084 { 00:13:39.084 "name": "BaseBdev4", 00:13:39.084 "uuid": "61fb71a3-a54c-5cc0-87c5-6b1ab37ced1b", 00:13:39.084 "is_configured": true, 00:13:39.084 "data_offset": 0, 00:13:39.084 "data_size": 65536 00:13:39.084 } 00:13:39.084 ] 00:13:39.084 }' 00:13:39.084 17:30:12 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:39.084 17:30:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:39.084 17:30:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:39.084 17:30:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:39.084 17:30:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:39.084 17:30:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.084 17:30:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.084 [2024-12-07 17:30:12.336112] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:39.084 [2024-12-07 17:30:12.352218] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:13:39.084 17:30:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.084 17:30:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:39.084 [2024-12-07 17:30:12.354332] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:40.023 17:30:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:40.023 17:30:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:40.023 17:30:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:40.023 17:30:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:40.023 17:30:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:40.023 17:30:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.023 17:30:13 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.023 17:30:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.023 17:30:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.023 17:30:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.282 17:30:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:40.282 "name": "raid_bdev1", 00:13:40.282 "uuid": "274d10b1-2af3-4e0b-9255-19c7df2bc84b", 00:13:40.282 "strip_size_kb": 0, 00:13:40.282 "state": "online", 00:13:40.282 "raid_level": "raid1", 00:13:40.282 "superblock": false, 00:13:40.282 "num_base_bdevs": 4, 00:13:40.282 "num_base_bdevs_discovered": 4, 00:13:40.282 "num_base_bdevs_operational": 4, 00:13:40.282 "process": { 00:13:40.282 "type": "rebuild", 00:13:40.282 "target": "spare", 00:13:40.282 "progress": { 00:13:40.282 "blocks": 20480, 00:13:40.282 "percent": 31 00:13:40.282 } 00:13:40.282 }, 00:13:40.282 "base_bdevs_list": [ 00:13:40.282 { 00:13:40.282 "name": "spare", 00:13:40.282 "uuid": "bbe25be5-18f7-5864-af9c-25f1a16aa769", 00:13:40.282 "is_configured": true, 00:13:40.282 "data_offset": 0, 00:13:40.282 "data_size": 65536 00:13:40.282 }, 00:13:40.282 { 00:13:40.282 "name": "BaseBdev2", 00:13:40.282 "uuid": "0c4ab6ee-3f28-542e-b6bc-2d5e79d4fb8a", 00:13:40.282 "is_configured": true, 00:13:40.282 "data_offset": 0, 00:13:40.282 "data_size": 65536 00:13:40.282 }, 00:13:40.282 { 00:13:40.282 "name": "BaseBdev3", 00:13:40.282 "uuid": "6208aa3e-70f6-5e2b-8f63-e99bf1fa8971", 00:13:40.282 "is_configured": true, 00:13:40.282 "data_offset": 0, 00:13:40.282 "data_size": 65536 00:13:40.282 }, 00:13:40.282 { 00:13:40.282 "name": "BaseBdev4", 00:13:40.282 "uuid": "61fb71a3-a54c-5cc0-87c5-6b1ab37ced1b", 00:13:40.282 "is_configured": true, 00:13:40.282 "data_offset": 0, 00:13:40.282 "data_size": 65536 00:13:40.282 } 00:13:40.282 ] 00:13:40.282 }' 
00:13:40.282 17:30:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:40.282 17:30:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:40.282 17:30:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:40.282 17:30:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:40.282 17:30:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:40.282 17:30:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:40.282 17:30:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:40.282 17:30:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:40.282 17:30:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:40.282 17:30:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.282 17:30:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.282 [2024-12-07 17:30:13.521562] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:40.282 [2024-12-07 17:30:13.559730] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:13:40.282 17:30:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.282 17:30:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:40.282 17:30:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:40.282 17:30:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:40.282 17:30:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:40.282 17:30:13 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:40.282 17:30:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:40.282 17:30:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:40.282 17:30:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.282 17:30:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.282 17:30:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.282 17:30:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.283 17:30:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.283 17:30:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:40.283 "name": "raid_bdev1", 00:13:40.283 "uuid": "274d10b1-2af3-4e0b-9255-19c7df2bc84b", 00:13:40.283 "strip_size_kb": 0, 00:13:40.283 "state": "online", 00:13:40.283 "raid_level": "raid1", 00:13:40.283 "superblock": false, 00:13:40.283 "num_base_bdevs": 4, 00:13:40.283 "num_base_bdevs_discovered": 3, 00:13:40.283 "num_base_bdevs_operational": 3, 00:13:40.283 "process": { 00:13:40.283 "type": "rebuild", 00:13:40.283 "target": "spare", 00:13:40.283 "progress": { 00:13:40.283 "blocks": 24576, 00:13:40.283 "percent": 37 00:13:40.283 } 00:13:40.283 }, 00:13:40.283 "base_bdevs_list": [ 00:13:40.283 { 00:13:40.283 "name": "spare", 00:13:40.283 "uuid": "bbe25be5-18f7-5864-af9c-25f1a16aa769", 00:13:40.283 "is_configured": true, 00:13:40.283 "data_offset": 0, 00:13:40.283 "data_size": 65536 00:13:40.283 }, 00:13:40.283 { 00:13:40.283 "name": null, 00:13:40.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.283 "is_configured": false, 00:13:40.283 "data_offset": 0, 00:13:40.283 "data_size": 65536 00:13:40.283 }, 00:13:40.283 { 00:13:40.283 "name": 
"BaseBdev3", 00:13:40.283 "uuid": "6208aa3e-70f6-5e2b-8f63-e99bf1fa8971", 00:13:40.283 "is_configured": true, 00:13:40.283 "data_offset": 0, 00:13:40.283 "data_size": 65536 00:13:40.283 }, 00:13:40.283 { 00:13:40.283 "name": "BaseBdev4", 00:13:40.283 "uuid": "61fb71a3-a54c-5cc0-87c5-6b1ab37ced1b", 00:13:40.283 "is_configured": true, 00:13:40.283 "data_offset": 0, 00:13:40.283 "data_size": 65536 00:13:40.283 } 00:13:40.283 ] 00:13:40.283 }' 00:13:40.283 17:30:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:40.283 17:30:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:40.283 17:30:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:40.542 17:30:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:40.542 17:30:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=444 00:13:40.542 17:30:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:40.542 17:30:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:40.542 17:30:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:40.542 17:30:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:40.542 17:30:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:40.542 17:30:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:40.542 17:30:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.542 17:30:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.542 17:30:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.543 17:30:13 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.543 17:30:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.543 17:30:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:40.543 "name": "raid_bdev1", 00:13:40.543 "uuid": "274d10b1-2af3-4e0b-9255-19c7df2bc84b", 00:13:40.543 "strip_size_kb": 0, 00:13:40.543 "state": "online", 00:13:40.543 "raid_level": "raid1", 00:13:40.543 "superblock": false, 00:13:40.543 "num_base_bdevs": 4, 00:13:40.543 "num_base_bdevs_discovered": 3, 00:13:40.543 "num_base_bdevs_operational": 3, 00:13:40.543 "process": { 00:13:40.543 "type": "rebuild", 00:13:40.543 "target": "spare", 00:13:40.543 "progress": { 00:13:40.543 "blocks": 26624, 00:13:40.543 "percent": 40 00:13:40.543 } 00:13:40.543 }, 00:13:40.543 "base_bdevs_list": [ 00:13:40.543 { 00:13:40.543 "name": "spare", 00:13:40.543 "uuid": "bbe25be5-18f7-5864-af9c-25f1a16aa769", 00:13:40.543 "is_configured": true, 00:13:40.543 "data_offset": 0, 00:13:40.543 "data_size": 65536 00:13:40.543 }, 00:13:40.543 { 00:13:40.543 "name": null, 00:13:40.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.543 "is_configured": false, 00:13:40.543 "data_offset": 0, 00:13:40.543 "data_size": 65536 00:13:40.543 }, 00:13:40.543 { 00:13:40.543 "name": "BaseBdev3", 00:13:40.543 "uuid": "6208aa3e-70f6-5e2b-8f63-e99bf1fa8971", 00:13:40.543 "is_configured": true, 00:13:40.543 "data_offset": 0, 00:13:40.543 "data_size": 65536 00:13:40.543 }, 00:13:40.543 { 00:13:40.543 "name": "BaseBdev4", 00:13:40.543 "uuid": "61fb71a3-a54c-5cc0-87c5-6b1ab37ced1b", 00:13:40.543 "is_configured": true, 00:13:40.543 "data_offset": 0, 00:13:40.543 "data_size": 65536 00:13:40.543 } 00:13:40.543 ] 00:13:40.543 }' 00:13:40.543 17:30:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:40.543 17:30:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:13:40.543 17:30:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:40.543 17:30:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:40.543 17:30:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:41.483 17:30:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:41.483 17:30:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:41.483 17:30:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:41.483 17:30:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:41.483 17:30:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:41.483 17:30:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:41.483 17:30:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.483 17:30:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.483 17:30:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.483 17:30:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.483 17:30:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.743 17:30:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:41.743 "name": "raid_bdev1", 00:13:41.743 "uuid": "274d10b1-2af3-4e0b-9255-19c7df2bc84b", 00:13:41.743 "strip_size_kb": 0, 00:13:41.743 "state": "online", 00:13:41.743 "raid_level": "raid1", 00:13:41.743 "superblock": false, 00:13:41.743 "num_base_bdevs": 4, 00:13:41.743 "num_base_bdevs_discovered": 3, 00:13:41.743 "num_base_bdevs_operational": 3, 00:13:41.743 "process": { 
00:13:41.743 "type": "rebuild", 00:13:41.743 "target": "spare", 00:13:41.743 "progress": { 00:13:41.743 "blocks": 49152, 00:13:41.743 "percent": 75 00:13:41.743 } 00:13:41.743 }, 00:13:41.743 "base_bdevs_list": [ 00:13:41.743 { 00:13:41.743 "name": "spare", 00:13:41.743 "uuid": "bbe25be5-18f7-5864-af9c-25f1a16aa769", 00:13:41.743 "is_configured": true, 00:13:41.743 "data_offset": 0, 00:13:41.743 "data_size": 65536 00:13:41.743 }, 00:13:41.743 { 00:13:41.743 "name": null, 00:13:41.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.743 "is_configured": false, 00:13:41.743 "data_offset": 0, 00:13:41.743 "data_size": 65536 00:13:41.743 }, 00:13:41.743 { 00:13:41.743 "name": "BaseBdev3", 00:13:41.743 "uuid": "6208aa3e-70f6-5e2b-8f63-e99bf1fa8971", 00:13:41.743 "is_configured": true, 00:13:41.743 "data_offset": 0, 00:13:41.743 "data_size": 65536 00:13:41.743 }, 00:13:41.743 { 00:13:41.743 "name": "BaseBdev4", 00:13:41.743 "uuid": "61fb71a3-a54c-5cc0-87c5-6b1ab37ced1b", 00:13:41.743 "is_configured": true, 00:13:41.743 "data_offset": 0, 00:13:41.743 "data_size": 65536 00:13:41.743 } 00:13:41.743 ] 00:13:41.743 }' 00:13:41.743 17:30:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:41.743 17:30:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:41.743 17:30:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:41.743 17:30:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:41.743 17:30:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:42.310 [2024-12-07 17:30:15.568726] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:42.310 [2024-12-07 17:30:15.568810] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:42.310 [2024-12-07 17:30:15.568872] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:42.879 17:30:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:42.879 17:30:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:42.879 17:30:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:42.879 17:30:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:42.879 17:30:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:42.879 17:30:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:42.879 17:30:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.879 17:30:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.879 17:30:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.879 17:30:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.879 17:30:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.879 17:30:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:42.879 "name": "raid_bdev1", 00:13:42.879 "uuid": "274d10b1-2af3-4e0b-9255-19c7df2bc84b", 00:13:42.879 "strip_size_kb": 0, 00:13:42.879 "state": "online", 00:13:42.879 "raid_level": "raid1", 00:13:42.879 "superblock": false, 00:13:42.879 "num_base_bdevs": 4, 00:13:42.879 "num_base_bdevs_discovered": 3, 00:13:42.879 "num_base_bdevs_operational": 3, 00:13:42.879 "base_bdevs_list": [ 00:13:42.879 { 00:13:42.879 "name": "spare", 00:13:42.879 "uuid": "bbe25be5-18f7-5864-af9c-25f1a16aa769", 00:13:42.879 "is_configured": true, 00:13:42.879 "data_offset": 0, 00:13:42.879 "data_size": 65536 00:13:42.879 }, 00:13:42.879 { 00:13:42.879 "name": null, 
00:13:42.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.879 "is_configured": false, 00:13:42.879 "data_offset": 0, 00:13:42.879 "data_size": 65536 00:13:42.879 }, 00:13:42.879 { 00:13:42.879 "name": "BaseBdev3", 00:13:42.879 "uuid": "6208aa3e-70f6-5e2b-8f63-e99bf1fa8971", 00:13:42.879 "is_configured": true, 00:13:42.879 "data_offset": 0, 00:13:42.879 "data_size": 65536 00:13:42.879 }, 00:13:42.879 { 00:13:42.879 "name": "BaseBdev4", 00:13:42.879 "uuid": "61fb71a3-a54c-5cc0-87c5-6b1ab37ced1b", 00:13:42.879 "is_configured": true, 00:13:42.879 "data_offset": 0, 00:13:42.879 "data_size": 65536 00:13:42.879 } 00:13:42.879 ] 00:13:42.879 }' 00:13:42.879 17:30:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:42.879 17:30:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:42.879 17:30:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:42.879 17:30:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:42.879 17:30:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:13:42.879 17:30:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:42.879 17:30:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:42.879 17:30:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:42.879 17:30:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:42.880 17:30:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:42.880 17:30:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.880 17:30:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.880 17:30:16 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.880 17:30:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.880 17:30:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.880 17:30:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:42.880 "name": "raid_bdev1", 00:13:42.880 "uuid": "274d10b1-2af3-4e0b-9255-19c7df2bc84b", 00:13:42.880 "strip_size_kb": 0, 00:13:42.880 "state": "online", 00:13:42.880 "raid_level": "raid1", 00:13:42.880 "superblock": false, 00:13:42.880 "num_base_bdevs": 4, 00:13:42.880 "num_base_bdevs_discovered": 3, 00:13:42.880 "num_base_bdevs_operational": 3, 00:13:42.880 "base_bdevs_list": [ 00:13:42.880 { 00:13:42.880 "name": "spare", 00:13:42.880 "uuid": "bbe25be5-18f7-5864-af9c-25f1a16aa769", 00:13:42.880 "is_configured": true, 00:13:42.880 "data_offset": 0, 00:13:42.880 "data_size": 65536 00:13:42.880 }, 00:13:42.880 { 00:13:42.880 "name": null, 00:13:42.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.880 "is_configured": false, 00:13:42.880 "data_offset": 0, 00:13:42.880 "data_size": 65536 00:13:42.880 }, 00:13:42.880 { 00:13:42.880 "name": "BaseBdev3", 00:13:42.880 "uuid": "6208aa3e-70f6-5e2b-8f63-e99bf1fa8971", 00:13:42.880 "is_configured": true, 00:13:42.880 "data_offset": 0, 00:13:42.880 "data_size": 65536 00:13:42.880 }, 00:13:42.880 { 00:13:42.880 "name": "BaseBdev4", 00:13:42.880 "uuid": "61fb71a3-a54c-5cc0-87c5-6b1ab37ced1b", 00:13:42.880 "is_configured": true, 00:13:42.880 "data_offset": 0, 00:13:42.880 "data_size": 65536 00:13:42.880 } 00:13:42.880 ] 00:13:42.880 }' 00:13:42.880 17:30:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:42.880 17:30:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:42.880 17:30:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:13:43.140 17:30:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:43.140 17:30:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:43.140 17:30:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:43.140 17:30:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:43.140 17:30:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:43.140 17:30:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:43.140 17:30:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:43.140 17:30:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.140 17:30:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.140 17:30:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.140 17:30:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.140 17:30:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.140 17:30:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.140 17:30:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.140 17:30:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.140 17:30:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.140 17:30:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.140 "name": "raid_bdev1", 00:13:43.140 "uuid": "274d10b1-2af3-4e0b-9255-19c7df2bc84b", 00:13:43.140 "strip_size_kb": 0, 00:13:43.140 "state": "online", 
00:13:43.140 "raid_level": "raid1", 00:13:43.140 "superblock": false, 00:13:43.140 "num_base_bdevs": 4, 00:13:43.140 "num_base_bdevs_discovered": 3, 00:13:43.140 "num_base_bdevs_operational": 3, 00:13:43.140 "base_bdevs_list": [ 00:13:43.140 { 00:13:43.140 "name": "spare", 00:13:43.140 "uuid": "bbe25be5-18f7-5864-af9c-25f1a16aa769", 00:13:43.140 "is_configured": true, 00:13:43.140 "data_offset": 0, 00:13:43.140 "data_size": 65536 00:13:43.140 }, 00:13:43.140 { 00:13:43.140 "name": null, 00:13:43.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.140 "is_configured": false, 00:13:43.140 "data_offset": 0, 00:13:43.140 "data_size": 65536 00:13:43.140 }, 00:13:43.140 { 00:13:43.140 "name": "BaseBdev3", 00:13:43.140 "uuid": "6208aa3e-70f6-5e2b-8f63-e99bf1fa8971", 00:13:43.140 "is_configured": true, 00:13:43.140 "data_offset": 0, 00:13:43.140 "data_size": 65536 00:13:43.140 }, 00:13:43.140 { 00:13:43.140 "name": "BaseBdev4", 00:13:43.140 "uuid": "61fb71a3-a54c-5cc0-87c5-6b1ab37ced1b", 00:13:43.140 "is_configured": true, 00:13:43.140 "data_offset": 0, 00:13:43.140 "data_size": 65536 00:13:43.140 } 00:13:43.140 ] 00:13:43.140 }' 00:13:43.140 17:30:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.140 17:30:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.400 17:30:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:43.400 17:30:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.400 17:30:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.400 [2024-12-07 17:30:16.725775] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:43.400 [2024-12-07 17:30:16.725857] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:43.400 [2024-12-07 17:30:16.725977] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:13:43.400 [2024-12-07 17:30:16.726104] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:43.400 [2024-12-07 17:30:16.726170] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:43.400 17:30:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.400 17:30:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.400 17:30:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:13:43.400 17:30:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.400 17:30:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.400 17:30:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.659 17:30:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:43.659 17:30:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:43.659 17:30:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:43.659 17:30:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:43.659 17:30:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:43.659 17:30:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:43.659 17:30:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:43.659 17:30:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:43.659 17:30:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:43.659 17:30:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local 
i 00:13:43.659 17:30:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:43.659 17:30:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:43.659 17:30:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:43.659 /dev/nbd0 00:13:43.659 17:30:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:43.659 17:30:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:43.659 17:30:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:43.659 17:30:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:43.659 17:30:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:43.659 17:30:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:43.659 17:30:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:43.659 17:30:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:43.659 17:30:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:43.659 17:30:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:43.659 17:30:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:43.659 1+0 records in 00:13:43.659 1+0 records out 00:13:43.659 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000398978 s, 10.3 MB/s 00:13:43.659 17:30:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:43.659 17:30:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:43.659 17:30:17 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:43.660 17:30:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:43.660 17:30:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:43.660 17:30:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:43.660 17:30:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:43.660 17:30:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:43.920 /dev/nbd1 00:13:43.920 17:30:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:43.920 17:30:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:43.920 17:30:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:43.920 17:30:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:43.920 17:30:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:43.920 17:30:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:43.920 17:30:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:43.920 17:30:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:43.920 17:30:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:43.920 17:30:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:43.920 17:30:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:43.920 1+0 records in 00:13:43.920 1+0 records out 00:13:43.920 4096 bytes (4.1 kB, 
4.0 KiB) copied, 0.000222422 s, 18.4 MB/s 00:13:43.920 17:30:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:43.920 17:30:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:43.920 17:30:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:43.920 17:30:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:43.920 17:30:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:43.920 17:30:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:43.920 17:30:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:43.920 17:30:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:44.180 17:30:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:44.180 17:30:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:44.180 17:30:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:44.180 17:30:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:44.180 17:30:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:44.180 17:30:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:44.180 17:30:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:44.440 17:30:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:44.440 17:30:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:44.440 17:30:17 bdev_raid.raid_rebuild_test 
-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:44.440 17:30:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:44.440 17:30:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:44.440 17:30:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:44.440 17:30:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:44.440 17:30:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:44.440 17:30:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:44.440 17:30:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:44.700 17:30:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:44.701 17:30:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:44.701 17:30:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:44.701 17:30:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:44.701 17:30:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:44.701 17:30:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:44.701 17:30:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:44.701 17:30:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:44.701 17:30:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:44.701 17:30:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77542 00:13:44.701 17:30:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 77542 ']' 00:13:44.701 17:30:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 77542 00:13:44.701 
17:30:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:13:44.701 17:30:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:44.701 17:30:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77542 00:13:44.701 17:30:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:44.701 17:30:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:44.701 17:30:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77542' 00:13:44.701 killing process with pid 77542 00:13:44.701 17:30:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 77542 00:13:44.701 Received shutdown signal, test time was about 60.000000 seconds 00:13:44.701 00:13:44.701 Latency(us) 00:13:44.701 [2024-12-07T17:30:18.083Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:44.701 [2024-12-07T17:30:18.083Z] =================================================================================================================== 00:13:44.701 [2024-12-07T17:30:18.083Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:44.701 17:30:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 77542 00:13:44.701 [2024-12-07 17:30:17.943182] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:45.272 [2024-12-07 17:30:18.401284] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:46.213 17:30:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:13:46.213 00:13:46.213 real 0m17.611s 00:13:46.213 user 0m19.358s 00:13:46.213 sys 0m3.591s 00:13:46.213 17:30:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:46.213 ************************************ 00:13:46.213 END TEST raid_rebuild_test 00:13:46.213 
************************************ 00:13:46.213 17:30:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.213 17:30:19 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:13:46.213 17:30:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:46.213 17:30:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:46.213 17:30:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:46.213 ************************************ 00:13:46.213 START TEST raid_rebuild_test_sb 00:13:46.213 ************************************ 00:13:46.213 17:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:13:46.213 17:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:46.213 17:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:46.213 17:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:46.213 17:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:46.213 17:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:46.213 17:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:46.213 17:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:46.213 17:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:46.213 17:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:46.213 17:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:46.213 17:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:46.213 17:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 
-- # (( i++ )) 00:13:46.213 17:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:46.213 17:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:46.213 17:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:46.213 17:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:46.213 17:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:46.213 17:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:46.213 17:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:46.213 17:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:46.213 17:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:46.213 17:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:46.213 17:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:46.213 17:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:46.213 17:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:46.213 17:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:46.213 17:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:46.213 17:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:46.213 17:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:46.213 17:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:46.213 17:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78001 
00:13:46.213 17:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:46.213 17:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 78001 00:13:46.213 17:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 78001 ']' 00:13:46.213 17:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:46.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:46.213 17:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:46.213 17:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:46.213 17:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:46.213 17:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.474 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:46.474 Zero copy mechanism will not be used. 00:13:46.474 [2024-12-07 17:30:19.651270] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:13:46.474 [2024-12-07 17:30:19.651402] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78001 ] 00:13:46.474 [2024-12-07 17:30:19.825232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.734 [2024-12-07 17:30:19.931434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.734 [2024-12-07 17:30:20.110630] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:46.734 [2024-12-07 17:30:20.110686] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:47.305 17:30:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:47.305 17:30:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:47.305 17:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:47.305 17:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:47.305 17:30:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.305 17:30:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.305 BaseBdev1_malloc 00:13:47.305 17:30:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.305 17:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:47.305 17:30:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.305 17:30:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.305 [2024-12-07 17:30:20.507975] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:13:47.305 [2024-12-07 17:30:20.508033] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.305 [2024-12-07 17:30:20.508055] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:47.305 [2024-12-07 17:30:20.508066] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.305 [2024-12-07 17:30:20.510055] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.305 [2024-12-07 17:30:20.510159] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:47.305 BaseBdev1 00:13:47.305 17:30:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.305 17:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:47.305 17:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:47.305 17:30:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.305 17:30:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.305 BaseBdev2_malloc 00:13:47.305 17:30:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.305 17:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:47.305 17:30:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.305 17:30:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.305 [2024-12-07 17:30:20.561438] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:47.305 [2024-12-07 17:30:20.561495] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.305 [2024-12-07 17:30:20.561515] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:47.305 [2024-12-07 17:30:20.561525] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.305 [2024-12-07 17:30:20.563594] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.305 [2024-12-07 17:30:20.563676] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:47.305 BaseBdev2 00:13:47.305 17:30:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.305 17:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:47.305 17:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:47.305 17:30:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.305 17:30:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.305 BaseBdev3_malloc 00:13:47.305 17:30:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.305 17:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:47.305 17:30:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.305 17:30:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.305 [2024-12-07 17:30:20.628144] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:47.305 [2024-12-07 17:30:20.628208] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.305 [2024-12-07 17:30:20.628228] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:47.305 [2024-12-07 17:30:20.628239] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:13:47.305 [2024-12-07 17:30:20.630329] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.305 [2024-12-07 17:30:20.630368] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:47.305 BaseBdev3 00:13:47.305 17:30:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.305 17:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:47.305 17:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:47.305 17:30:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.305 17:30:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.305 BaseBdev4_malloc 00:13:47.305 17:30:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.305 17:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:47.305 17:30:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.305 17:30:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.305 [2024-12-07 17:30:20.683218] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:47.305 [2024-12-07 17:30:20.683275] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.305 [2024-12-07 17:30:20.683296] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:47.305 [2024-12-07 17:30:20.683306] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.566 [2024-12-07 17:30:20.685405] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.566 [2024-12-07 17:30:20.685449] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:47.566 BaseBdev4 00:13:47.566 17:30:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.566 17:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:47.566 17:30:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.566 17:30:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.566 spare_malloc 00:13:47.566 17:30:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.566 17:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:47.566 17:30:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.566 17:30:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.566 spare_delay 00:13:47.566 17:30:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.566 17:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:47.566 17:30:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.566 17:30:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.566 [2024-12-07 17:30:20.750513] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:47.566 [2024-12-07 17:30:20.750563] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.566 [2024-12-07 17:30:20.750581] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:47.566 [2024-12-07 17:30:20.750591] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:13:47.566 [2024-12-07 17:30:20.752562] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.566 [2024-12-07 17:30:20.752657] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:47.566 spare 00:13:47.566 17:30:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.566 17:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:47.566 17:30:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.566 17:30:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.566 [2024-12-07 17:30:20.762541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:47.566 [2024-12-07 17:30:20.764298] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:47.566 [2024-12-07 17:30:20.764361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:47.566 [2024-12-07 17:30:20.764411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:47.566 [2024-12-07 17:30:20.764601] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:47.566 [2024-12-07 17:30:20.764616] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:47.566 [2024-12-07 17:30:20.764851] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:47.566 [2024-12-07 17:30:20.765030] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:47.566 [2024-12-07 17:30:20.765041] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:47.566 [2024-12-07 17:30:20.765178] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:47.566 17:30:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.566 17:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:47.566 17:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:47.566 17:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:47.566 17:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:47.566 17:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:47.566 17:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:47.566 17:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.566 17:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.566 17:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.566 17:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.566 17:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.566 17:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.566 17:30:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.566 17:30:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.566 17:30:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.566 17:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.566 "name": "raid_bdev1", 00:13:47.566 "uuid": 
"2678fe94-481f-464c-a05d-889464e6c45f", 00:13:47.566 "strip_size_kb": 0, 00:13:47.566 "state": "online", 00:13:47.566 "raid_level": "raid1", 00:13:47.567 "superblock": true, 00:13:47.567 "num_base_bdevs": 4, 00:13:47.567 "num_base_bdevs_discovered": 4, 00:13:47.567 "num_base_bdevs_operational": 4, 00:13:47.567 "base_bdevs_list": [ 00:13:47.567 { 00:13:47.567 "name": "BaseBdev1", 00:13:47.567 "uuid": "13f40fdc-54f9-5457-81e3-cdd26f43b736", 00:13:47.567 "is_configured": true, 00:13:47.567 "data_offset": 2048, 00:13:47.567 "data_size": 63488 00:13:47.567 }, 00:13:47.567 { 00:13:47.567 "name": "BaseBdev2", 00:13:47.567 "uuid": "2d772cea-f81d-5e07-90bc-4c98167c3dc6", 00:13:47.567 "is_configured": true, 00:13:47.567 "data_offset": 2048, 00:13:47.567 "data_size": 63488 00:13:47.567 }, 00:13:47.567 { 00:13:47.567 "name": "BaseBdev3", 00:13:47.567 "uuid": "bd1998ce-7450-5a34-811f-057559aa6542", 00:13:47.567 "is_configured": true, 00:13:47.567 "data_offset": 2048, 00:13:47.567 "data_size": 63488 00:13:47.567 }, 00:13:47.567 { 00:13:47.567 "name": "BaseBdev4", 00:13:47.567 "uuid": "1f83ef23-2463-5459-86c4-9f38325004e5", 00:13:47.567 "is_configured": true, 00:13:47.567 "data_offset": 2048, 00:13:47.567 "data_size": 63488 00:13:47.567 } 00:13:47.567 ] 00:13:47.567 }' 00:13:47.567 17:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.567 17:30:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.833 17:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:47.833 17:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:47.833 17:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.833 17:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.833 [2024-12-07 17:30:21.202173] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:13:48.133 17:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.133 17:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:48.133 17:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:48.133 17:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.133 17:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.133 17:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.133 17:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.133 17:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:48.133 17:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:48.133 17:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:48.133 17:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:48.133 17:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:48.133 17:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:48.133 17:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:48.133 17:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:48.133 17:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:48.133 17:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:48.133 17:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:48.133 17:30:21 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:48.133 17:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:48.133 17:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:48.133 [2024-12-07 17:30:21.445452] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:48.133 /dev/nbd0 00:13:48.133 17:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:48.133 17:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:48.133 17:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:48.133 17:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:48.133 17:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:48.133 17:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:48.133 17:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:48.133 17:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:48.133 17:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:48.133 17:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:48.133 17:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:48.133 1+0 records in 00:13:48.133 1+0 records out 00:13:48.133 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000252385 s, 16.2 MB/s 00:13:48.133 17:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:48.133 17:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:48.133 17:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:48.133 17:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:48.133 17:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:48.133 17:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:48.133 17:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:48.133 17:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:48.133 17:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:48.133 17:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:13:53.421 63488+0 records in 00:13:53.421 63488+0 records out 00:13:53.421 32505856 bytes (33 MB, 31 MiB) copied, 5.17558 s, 6.3 MB/s 00:13:53.421 17:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:53.421 17:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:53.421 17:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:53.421 17:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:53.421 17:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:53.421 17:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:53.421 17:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk 
/dev/nbd0 00:13:53.680 [2024-12-07 17:30:26.872558] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:53.680 17:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:53.680 17:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:53.680 17:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:53.680 17:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:53.680 17:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:53.680 17:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:53.680 17:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:53.680 17:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:53.680 17:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:53.680 17:30:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.680 17:30:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.680 [2024-12-07 17:30:26.905105] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:53.680 17:30:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.680 17:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:53.680 17:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:53.680 17:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:53.680 17:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:53.680 17:30:26 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:53.680 17:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:53.680 17:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.680 17:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:53.680 17:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.680 17:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.680 17:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.680 17:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.680 17:30:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.680 17:30:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.680 17:30:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.680 17:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.680 "name": "raid_bdev1", 00:13:53.680 "uuid": "2678fe94-481f-464c-a05d-889464e6c45f", 00:13:53.680 "strip_size_kb": 0, 00:13:53.680 "state": "online", 00:13:53.680 "raid_level": "raid1", 00:13:53.680 "superblock": true, 00:13:53.680 "num_base_bdevs": 4, 00:13:53.680 "num_base_bdevs_discovered": 3, 00:13:53.680 "num_base_bdevs_operational": 3, 00:13:53.680 "base_bdevs_list": [ 00:13:53.680 { 00:13:53.680 "name": null, 00:13:53.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.680 "is_configured": false, 00:13:53.680 "data_offset": 0, 00:13:53.680 "data_size": 63488 00:13:53.680 }, 00:13:53.680 { 00:13:53.680 "name": "BaseBdev2", 00:13:53.680 "uuid": "2d772cea-f81d-5e07-90bc-4c98167c3dc6", 00:13:53.680 "is_configured": true, 00:13:53.680 
"data_offset": 2048, 00:13:53.680 "data_size": 63488 00:13:53.680 }, 00:13:53.680 { 00:13:53.680 "name": "BaseBdev3", 00:13:53.680 "uuid": "bd1998ce-7450-5a34-811f-057559aa6542", 00:13:53.680 "is_configured": true, 00:13:53.680 "data_offset": 2048, 00:13:53.680 "data_size": 63488 00:13:53.680 }, 00:13:53.680 { 00:13:53.680 "name": "BaseBdev4", 00:13:53.680 "uuid": "1f83ef23-2463-5459-86c4-9f38325004e5", 00:13:53.680 "is_configured": true, 00:13:53.680 "data_offset": 2048, 00:13:53.680 "data_size": 63488 00:13:53.680 } 00:13:53.680 ] 00:13:53.680 }' 00:13:53.680 17:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.680 17:30:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.248 17:30:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:54.248 17:30:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.248 17:30:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.248 [2024-12-07 17:30:27.348353] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:54.248 [2024-12-07 17:30:27.362752] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:13:54.248 17:30:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.248 17:30:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:54.248 [2024-12-07 17:30:27.364708] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:55.186 17:30:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:55.186 17:30:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:55.186 17:30:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:13:55.186 17:30:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:55.186 17:30:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:55.186 17:30:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.186 17:30:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.186 17:30:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.186 17:30:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.186 17:30:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.186 17:30:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:55.186 "name": "raid_bdev1", 00:13:55.186 "uuid": "2678fe94-481f-464c-a05d-889464e6c45f", 00:13:55.186 "strip_size_kb": 0, 00:13:55.186 "state": "online", 00:13:55.186 "raid_level": "raid1", 00:13:55.186 "superblock": true, 00:13:55.186 "num_base_bdevs": 4, 00:13:55.186 "num_base_bdevs_discovered": 4, 00:13:55.186 "num_base_bdevs_operational": 4, 00:13:55.186 "process": { 00:13:55.186 "type": "rebuild", 00:13:55.186 "target": "spare", 00:13:55.186 "progress": { 00:13:55.186 "blocks": 20480, 00:13:55.186 "percent": 32 00:13:55.186 } 00:13:55.186 }, 00:13:55.186 "base_bdevs_list": [ 00:13:55.186 { 00:13:55.186 "name": "spare", 00:13:55.186 "uuid": "85be4cb7-ded8-507b-972e-bccd40de33c8", 00:13:55.186 "is_configured": true, 00:13:55.186 "data_offset": 2048, 00:13:55.186 "data_size": 63488 00:13:55.186 }, 00:13:55.186 { 00:13:55.186 "name": "BaseBdev2", 00:13:55.186 "uuid": "2d772cea-f81d-5e07-90bc-4c98167c3dc6", 00:13:55.186 "is_configured": true, 00:13:55.186 "data_offset": 2048, 00:13:55.186 "data_size": 63488 00:13:55.186 }, 00:13:55.186 { 00:13:55.186 "name": "BaseBdev3", 00:13:55.186 "uuid": 
"bd1998ce-7450-5a34-811f-057559aa6542", 00:13:55.186 "is_configured": true, 00:13:55.186 "data_offset": 2048, 00:13:55.186 "data_size": 63488 00:13:55.186 }, 00:13:55.186 { 00:13:55.186 "name": "BaseBdev4", 00:13:55.186 "uuid": "1f83ef23-2463-5459-86c4-9f38325004e5", 00:13:55.186 "is_configured": true, 00:13:55.186 "data_offset": 2048, 00:13:55.186 "data_size": 63488 00:13:55.186 } 00:13:55.186 ] 00:13:55.186 }' 00:13:55.186 17:30:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:55.186 17:30:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:55.186 17:30:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:55.186 17:30:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:55.186 17:30:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:55.186 17:30:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.186 17:30:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.186 [2024-12-07 17:30:28.528461] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:55.446 [2024-12-07 17:30:28.570078] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:55.446 [2024-12-07 17:30:28.570141] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:55.446 [2024-12-07 17:30:28.570159] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:55.446 [2024-12-07 17:30:28.570169] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:55.446 17:30:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.446 17:30:28 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:55.446 17:30:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:55.446 17:30:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:55.446 17:30:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:55.446 17:30:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:55.446 17:30:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:55.446 17:30:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.446 17:30:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.446 17:30:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.446 17:30:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.446 17:30:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.446 17:30:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.446 17:30:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.446 17:30:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.446 17:30:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.446 17:30:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.446 "name": "raid_bdev1", 00:13:55.446 "uuid": "2678fe94-481f-464c-a05d-889464e6c45f", 00:13:55.446 "strip_size_kb": 0, 00:13:55.446 "state": "online", 00:13:55.446 "raid_level": "raid1", 00:13:55.446 "superblock": true, 00:13:55.446 "num_base_bdevs": 4, 00:13:55.446 
"num_base_bdevs_discovered": 3, 00:13:55.446 "num_base_bdevs_operational": 3, 00:13:55.446 "base_bdevs_list": [ 00:13:55.446 { 00:13:55.446 "name": null, 00:13:55.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.446 "is_configured": false, 00:13:55.446 "data_offset": 0, 00:13:55.446 "data_size": 63488 00:13:55.446 }, 00:13:55.446 { 00:13:55.446 "name": "BaseBdev2", 00:13:55.446 "uuid": "2d772cea-f81d-5e07-90bc-4c98167c3dc6", 00:13:55.446 "is_configured": true, 00:13:55.446 "data_offset": 2048, 00:13:55.446 "data_size": 63488 00:13:55.446 }, 00:13:55.446 { 00:13:55.446 "name": "BaseBdev3", 00:13:55.446 "uuid": "bd1998ce-7450-5a34-811f-057559aa6542", 00:13:55.446 "is_configured": true, 00:13:55.446 "data_offset": 2048, 00:13:55.446 "data_size": 63488 00:13:55.446 }, 00:13:55.446 { 00:13:55.446 "name": "BaseBdev4", 00:13:55.446 "uuid": "1f83ef23-2463-5459-86c4-9f38325004e5", 00:13:55.446 "is_configured": true, 00:13:55.446 "data_offset": 2048, 00:13:55.446 "data_size": 63488 00:13:55.446 } 00:13:55.446 ] 00:13:55.446 }' 00:13:55.446 17:30:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.446 17:30:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.706 17:30:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:55.706 17:30:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:55.706 17:30:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:55.706 17:30:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:55.706 17:30:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:55.706 17:30:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.706 17:30:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:13:55.706 17:30:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.706 17:30:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.706 17:30:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.706 17:30:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:55.706 "name": "raid_bdev1", 00:13:55.706 "uuid": "2678fe94-481f-464c-a05d-889464e6c45f", 00:13:55.706 "strip_size_kb": 0, 00:13:55.706 "state": "online", 00:13:55.706 "raid_level": "raid1", 00:13:55.706 "superblock": true, 00:13:55.706 "num_base_bdevs": 4, 00:13:55.706 "num_base_bdevs_discovered": 3, 00:13:55.706 "num_base_bdevs_operational": 3, 00:13:55.706 "base_bdevs_list": [ 00:13:55.706 { 00:13:55.706 "name": null, 00:13:55.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.706 "is_configured": false, 00:13:55.706 "data_offset": 0, 00:13:55.706 "data_size": 63488 00:13:55.706 }, 00:13:55.706 { 00:13:55.706 "name": "BaseBdev2", 00:13:55.706 "uuid": "2d772cea-f81d-5e07-90bc-4c98167c3dc6", 00:13:55.706 "is_configured": true, 00:13:55.706 "data_offset": 2048, 00:13:55.706 "data_size": 63488 00:13:55.706 }, 00:13:55.706 { 00:13:55.706 "name": "BaseBdev3", 00:13:55.706 "uuid": "bd1998ce-7450-5a34-811f-057559aa6542", 00:13:55.706 "is_configured": true, 00:13:55.706 "data_offset": 2048, 00:13:55.706 "data_size": 63488 00:13:55.706 }, 00:13:55.706 { 00:13:55.706 "name": "BaseBdev4", 00:13:55.706 "uuid": "1f83ef23-2463-5459-86c4-9f38325004e5", 00:13:55.706 "is_configured": true, 00:13:55.706 "data_offset": 2048, 00:13:55.706 "data_size": 63488 00:13:55.706 } 00:13:55.706 ] 00:13:55.706 }' 00:13:55.706 17:30:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:55.965 17:30:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:13:55.965 17:30:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:55.965 17:30:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:55.965 17:30:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:55.965 17:30:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.965 17:30:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.965 [2024-12-07 17:30:29.190866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:55.965 [2024-12-07 17:30:29.204167] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:13:55.965 17:30:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.965 17:30:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:55.966 [2024-12-07 17:30:29.206102] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:56.977 17:30:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:56.977 17:30:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:56.977 17:30:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:56.977 17:30:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:56.977 17:30:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:56.977 17:30:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.977 17:30:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.977 17:30:30 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.977 17:30:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.977 17:30:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.977 17:30:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:56.977 "name": "raid_bdev1", 00:13:56.977 "uuid": "2678fe94-481f-464c-a05d-889464e6c45f", 00:13:56.977 "strip_size_kb": 0, 00:13:56.977 "state": "online", 00:13:56.977 "raid_level": "raid1", 00:13:56.977 "superblock": true, 00:13:56.977 "num_base_bdevs": 4, 00:13:56.977 "num_base_bdevs_discovered": 4, 00:13:56.977 "num_base_bdevs_operational": 4, 00:13:56.977 "process": { 00:13:56.977 "type": "rebuild", 00:13:56.977 "target": "spare", 00:13:56.977 "progress": { 00:13:56.977 "blocks": 20480, 00:13:56.977 "percent": 32 00:13:56.977 } 00:13:56.977 }, 00:13:56.977 "base_bdevs_list": [ 00:13:56.977 { 00:13:56.977 "name": "spare", 00:13:56.977 "uuid": "85be4cb7-ded8-507b-972e-bccd40de33c8", 00:13:56.977 "is_configured": true, 00:13:56.977 "data_offset": 2048, 00:13:56.977 "data_size": 63488 00:13:56.977 }, 00:13:56.977 { 00:13:56.977 "name": "BaseBdev2", 00:13:56.977 "uuid": "2d772cea-f81d-5e07-90bc-4c98167c3dc6", 00:13:56.977 "is_configured": true, 00:13:56.977 "data_offset": 2048, 00:13:56.977 "data_size": 63488 00:13:56.977 }, 00:13:56.977 { 00:13:56.977 "name": "BaseBdev3", 00:13:56.977 "uuid": "bd1998ce-7450-5a34-811f-057559aa6542", 00:13:56.977 "is_configured": true, 00:13:56.977 "data_offset": 2048, 00:13:56.977 "data_size": 63488 00:13:56.977 }, 00:13:56.977 { 00:13:56.977 "name": "BaseBdev4", 00:13:56.977 "uuid": "1f83ef23-2463-5459-86c4-9f38325004e5", 00:13:56.977 "is_configured": true, 00:13:56.977 "data_offset": 2048, 00:13:56.977 "data_size": 63488 00:13:56.977 } 00:13:56.977 ] 00:13:56.977 }' 00:13:56.977 17:30:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:13:56.977 17:30:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:56.977 17:30:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:56.977 17:30:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:56.977 17:30:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:56.977 17:30:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:56.977 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:56.977 17:30:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:56.977 17:30:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:56.978 17:30:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:56.978 17:30:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:56.978 17:30:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.978 17:30:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.978 [2024-12-07 17:30:30.349454] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:57.238 [2024-12-07 17:30:30.510955] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:13:57.238 17:30:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.238 17:30:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:57.238 17:30:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:57.238 17:30:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process 
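An aside on the error logged above: `bdev_raid.sh: line 666: [: =: unary operator expected` is the classic symptom of an unquoted shell variable expanding to nothing inside `[ ... ]`, so the test collapses from `[ $var = false ]` to `[ = false ]`. The log does not show which variable was empty, so the name below is hypothetical; this is a minimal sketch of the failure mode and the quoting fix, not the actual script:

```shell
#!/usr/bin/env bash
# Hypothetical variable standing in for whatever was empty at bdev_raid.sh:666.
fast_io=""

# Unquoted form: with fast_io empty this becomes `[ = false ]`, which bash
# rejects with "unary operator expected" (exactly the error in the log),
# and the `if` then takes the else branch because the test command failed.
if [ $fast_io = false ] 2>/dev/null; then
    echo "unquoted: matched false"
else
    echo "unquoted: test errored or did not match"
fi

# Quoted form: `[ "" = false ]` is a well-formed comparison that simply
# evaluates to false, so no error is raised.
if [ "$fast_io" = false ]; then
    echo "quoted: matched false"
else
    echo "quoted: empty is not false"
fi
```

Note the test run itself tolerates this: the script continues past line 666 and the rebuild proceeds, which is consistent with `set -e` not aborting on a failed `[` used as an `if` condition.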
raid_bdev1 rebuild spare 00:13:57.238 17:30:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:57.238 17:30:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:57.238 17:30:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:57.238 17:30:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:57.238 17:30:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.238 17:30:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.238 17:30:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.238 17:30:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.238 17:30:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.238 17:30:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:57.238 "name": "raid_bdev1", 00:13:57.238 "uuid": "2678fe94-481f-464c-a05d-889464e6c45f", 00:13:57.238 "strip_size_kb": 0, 00:13:57.238 "state": "online", 00:13:57.238 "raid_level": "raid1", 00:13:57.238 "superblock": true, 00:13:57.238 "num_base_bdevs": 4, 00:13:57.238 "num_base_bdevs_discovered": 3, 00:13:57.238 "num_base_bdevs_operational": 3, 00:13:57.238 "process": { 00:13:57.238 "type": "rebuild", 00:13:57.238 "target": "spare", 00:13:57.238 "progress": { 00:13:57.238 "blocks": 24576, 00:13:57.238 "percent": 38 00:13:57.238 } 00:13:57.238 }, 00:13:57.238 "base_bdevs_list": [ 00:13:57.238 { 00:13:57.238 "name": "spare", 00:13:57.238 "uuid": "85be4cb7-ded8-507b-972e-bccd40de33c8", 00:13:57.238 "is_configured": true, 00:13:57.238 "data_offset": 2048, 00:13:57.238 "data_size": 63488 00:13:57.238 }, 00:13:57.238 { 00:13:57.238 "name": null, 00:13:57.238 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:57.238 "is_configured": false, 00:13:57.238 "data_offset": 0, 00:13:57.238 "data_size": 63488 00:13:57.238 }, 00:13:57.238 { 00:13:57.238 "name": "BaseBdev3", 00:13:57.238 "uuid": "bd1998ce-7450-5a34-811f-057559aa6542", 00:13:57.238 "is_configured": true, 00:13:57.238 "data_offset": 2048, 00:13:57.238 "data_size": 63488 00:13:57.238 }, 00:13:57.238 { 00:13:57.238 "name": "BaseBdev4", 00:13:57.238 "uuid": "1f83ef23-2463-5459-86c4-9f38325004e5", 00:13:57.238 "is_configured": true, 00:13:57.238 "data_offset": 2048, 00:13:57.238 "data_size": 63488 00:13:57.238 } 00:13:57.238 ] 00:13:57.238 }' 00:13:57.238 17:30:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:57.498 17:30:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:57.498 17:30:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:57.498 17:30:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:57.498 17:30:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=461 00:13:57.498 17:30:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:57.498 17:30:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:57.498 17:30:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:57.498 17:30:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:57.498 17:30:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:57.498 17:30:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:57.498 17:30:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.498 
17:30:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.498 17:30:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.498 17:30:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.498 17:30:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.498 17:30:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:57.498 "name": "raid_bdev1", 00:13:57.499 "uuid": "2678fe94-481f-464c-a05d-889464e6c45f", 00:13:57.499 "strip_size_kb": 0, 00:13:57.499 "state": "online", 00:13:57.499 "raid_level": "raid1", 00:13:57.499 "superblock": true, 00:13:57.499 "num_base_bdevs": 4, 00:13:57.499 "num_base_bdevs_discovered": 3, 00:13:57.499 "num_base_bdevs_operational": 3, 00:13:57.499 "process": { 00:13:57.499 "type": "rebuild", 00:13:57.499 "target": "spare", 00:13:57.499 "progress": { 00:13:57.499 "blocks": 26624, 00:13:57.499 "percent": 41 00:13:57.499 } 00:13:57.499 }, 00:13:57.499 "base_bdevs_list": [ 00:13:57.499 { 00:13:57.499 "name": "spare", 00:13:57.499 "uuid": "85be4cb7-ded8-507b-972e-bccd40de33c8", 00:13:57.499 "is_configured": true, 00:13:57.499 "data_offset": 2048, 00:13:57.499 "data_size": 63488 00:13:57.499 }, 00:13:57.499 { 00:13:57.499 "name": null, 00:13:57.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.499 "is_configured": false, 00:13:57.499 "data_offset": 0, 00:13:57.499 "data_size": 63488 00:13:57.499 }, 00:13:57.499 { 00:13:57.499 "name": "BaseBdev3", 00:13:57.499 "uuid": "bd1998ce-7450-5a34-811f-057559aa6542", 00:13:57.499 "is_configured": true, 00:13:57.499 "data_offset": 2048, 00:13:57.499 "data_size": 63488 00:13:57.499 }, 00:13:57.499 { 00:13:57.499 "name": "BaseBdev4", 00:13:57.499 "uuid": "1f83ef23-2463-5459-86c4-9f38325004e5", 00:13:57.499 "is_configured": true, 00:13:57.499 "data_offset": 2048, 00:13:57.499 "data_size": 63488 
00:13:57.499 } 00:13:57.499 ] 00:13:57.499 }' 00:13:57.499 17:30:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:57.499 17:30:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:57.499 17:30:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:57.499 17:30:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:57.499 17:30:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:58.440 17:30:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:58.440 17:30:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:58.440 17:30:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:58.440 17:30:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:58.440 17:30:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:58.440 17:30:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:58.440 17:30:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.440 17:30:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.440 17:30:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.440 17:30:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.440 17:30:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.700 17:30:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:58.700 "name": "raid_bdev1", 00:13:58.700 "uuid": 
"2678fe94-481f-464c-a05d-889464e6c45f", 00:13:58.700 "strip_size_kb": 0, 00:13:58.700 "state": "online", 00:13:58.700 "raid_level": "raid1", 00:13:58.700 "superblock": true, 00:13:58.700 "num_base_bdevs": 4, 00:13:58.700 "num_base_bdevs_discovered": 3, 00:13:58.700 "num_base_bdevs_operational": 3, 00:13:58.700 "process": { 00:13:58.700 "type": "rebuild", 00:13:58.700 "target": "spare", 00:13:58.700 "progress": { 00:13:58.700 "blocks": 49152, 00:13:58.700 "percent": 77 00:13:58.700 } 00:13:58.700 }, 00:13:58.700 "base_bdevs_list": [ 00:13:58.700 { 00:13:58.700 "name": "spare", 00:13:58.700 "uuid": "85be4cb7-ded8-507b-972e-bccd40de33c8", 00:13:58.700 "is_configured": true, 00:13:58.700 "data_offset": 2048, 00:13:58.700 "data_size": 63488 00:13:58.700 }, 00:13:58.700 { 00:13:58.700 "name": null, 00:13:58.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.700 "is_configured": false, 00:13:58.700 "data_offset": 0, 00:13:58.700 "data_size": 63488 00:13:58.700 }, 00:13:58.700 { 00:13:58.700 "name": "BaseBdev3", 00:13:58.700 "uuid": "bd1998ce-7450-5a34-811f-057559aa6542", 00:13:58.700 "is_configured": true, 00:13:58.700 "data_offset": 2048, 00:13:58.700 "data_size": 63488 00:13:58.700 }, 00:13:58.700 { 00:13:58.700 "name": "BaseBdev4", 00:13:58.700 "uuid": "1f83ef23-2463-5459-86c4-9f38325004e5", 00:13:58.700 "is_configured": true, 00:13:58.700 "data_offset": 2048, 00:13:58.700 "data_size": 63488 00:13:58.700 } 00:13:58.700 ] 00:13:58.700 }' 00:13:58.700 17:30:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:58.700 17:30:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:58.700 17:30:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:58.700 17:30:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:58.700 17:30:31 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:13:59.271 [2024-12-07 17:30:32.418370] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:59.271 [2024-12-07 17:30:32.418491] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:59.271 [2024-12-07 17:30:32.418651] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:59.841 17:30:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:59.841 17:30:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:59.841 17:30:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:59.841 17:30:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:59.841 17:30:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:59.841 17:30:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:59.841 17:30:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.841 17:30:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.841 17:30:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.841 17:30:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.841 17:30:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.841 17:30:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:59.841 "name": "raid_bdev1", 00:13:59.841 "uuid": "2678fe94-481f-464c-a05d-889464e6c45f", 00:13:59.841 "strip_size_kb": 0, 00:13:59.841 "state": "online", 00:13:59.841 "raid_level": "raid1", 00:13:59.841 "superblock": true, 00:13:59.841 "num_base_bdevs": 
4, 00:13:59.841 "num_base_bdevs_discovered": 3, 00:13:59.841 "num_base_bdevs_operational": 3, 00:13:59.841 "base_bdevs_list": [ 00:13:59.841 { 00:13:59.841 "name": "spare", 00:13:59.841 "uuid": "85be4cb7-ded8-507b-972e-bccd40de33c8", 00:13:59.841 "is_configured": true, 00:13:59.841 "data_offset": 2048, 00:13:59.841 "data_size": 63488 00:13:59.841 }, 00:13:59.841 { 00:13:59.841 "name": null, 00:13:59.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.841 "is_configured": false, 00:13:59.841 "data_offset": 0, 00:13:59.841 "data_size": 63488 00:13:59.841 }, 00:13:59.841 { 00:13:59.841 "name": "BaseBdev3", 00:13:59.841 "uuid": "bd1998ce-7450-5a34-811f-057559aa6542", 00:13:59.841 "is_configured": true, 00:13:59.841 "data_offset": 2048, 00:13:59.841 "data_size": 63488 00:13:59.841 }, 00:13:59.841 { 00:13:59.841 "name": "BaseBdev4", 00:13:59.841 "uuid": "1f83ef23-2463-5459-86c4-9f38325004e5", 00:13:59.841 "is_configured": true, 00:13:59.841 "data_offset": 2048, 00:13:59.841 "data_size": 63488 00:13:59.841 } 00:13:59.841 ] 00:13:59.841 }' 00:13:59.841 17:30:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:59.841 17:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:59.841 17:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:59.841 17:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:59.841 17:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:13:59.841 17:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:59.841 17:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:59.841 17:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:59.841 17:30:33 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:59.841 17:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:59.841 17:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.841 17:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.841 17:30:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.841 17:30:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.841 17:30:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.841 17:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:59.841 "name": "raid_bdev1", 00:13:59.841 "uuid": "2678fe94-481f-464c-a05d-889464e6c45f", 00:13:59.841 "strip_size_kb": 0, 00:13:59.841 "state": "online", 00:13:59.841 "raid_level": "raid1", 00:13:59.841 "superblock": true, 00:13:59.841 "num_base_bdevs": 4, 00:13:59.841 "num_base_bdevs_discovered": 3, 00:13:59.841 "num_base_bdevs_operational": 3, 00:13:59.841 "base_bdevs_list": [ 00:13:59.841 { 00:13:59.841 "name": "spare", 00:13:59.841 "uuid": "85be4cb7-ded8-507b-972e-bccd40de33c8", 00:13:59.841 "is_configured": true, 00:13:59.841 "data_offset": 2048, 00:13:59.841 "data_size": 63488 00:13:59.841 }, 00:13:59.841 { 00:13:59.841 "name": null, 00:13:59.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.841 "is_configured": false, 00:13:59.841 "data_offset": 0, 00:13:59.841 "data_size": 63488 00:13:59.841 }, 00:13:59.841 { 00:13:59.841 "name": "BaseBdev3", 00:13:59.841 "uuid": "bd1998ce-7450-5a34-811f-057559aa6542", 00:13:59.841 "is_configured": true, 00:13:59.841 "data_offset": 2048, 00:13:59.841 "data_size": 63488 00:13:59.841 }, 00:13:59.841 { 00:13:59.841 "name": "BaseBdev4", 00:13:59.841 "uuid": 
"1f83ef23-2463-5459-86c4-9f38325004e5", 00:13:59.841 "is_configured": true, 00:13:59.841 "data_offset": 2048, 00:13:59.841 "data_size": 63488 00:13:59.841 } 00:13:59.841 ] 00:13:59.841 }' 00:13:59.841 17:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:59.841 17:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:59.841 17:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:59.841 17:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:59.841 17:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:59.841 17:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:59.841 17:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:59.841 17:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:59.841 17:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:59.841 17:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:59.841 17:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.841 17:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.841 17:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.841 17:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.841 17:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.841 17:30:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.841 17:30:33 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.841 17:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.841 17:30:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.100 17:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.100 "name": "raid_bdev1", 00:14:00.100 "uuid": "2678fe94-481f-464c-a05d-889464e6c45f", 00:14:00.100 "strip_size_kb": 0, 00:14:00.100 "state": "online", 00:14:00.100 "raid_level": "raid1", 00:14:00.100 "superblock": true, 00:14:00.100 "num_base_bdevs": 4, 00:14:00.100 "num_base_bdevs_discovered": 3, 00:14:00.100 "num_base_bdevs_operational": 3, 00:14:00.100 "base_bdevs_list": [ 00:14:00.100 { 00:14:00.100 "name": "spare", 00:14:00.100 "uuid": "85be4cb7-ded8-507b-972e-bccd40de33c8", 00:14:00.100 "is_configured": true, 00:14:00.100 "data_offset": 2048, 00:14:00.100 "data_size": 63488 00:14:00.100 }, 00:14:00.100 { 00:14:00.100 "name": null, 00:14:00.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.100 "is_configured": false, 00:14:00.100 "data_offset": 0, 00:14:00.100 "data_size": 63488 00:14:00.100 }, 00:14:00.100 { 00:14:00.100 "name": "BaseBdev3", 00:14:00.100 "uuid": "bd1998ce-7450-5a34-811f-057559aa6542", 00:14:00.100 "is_configured": true, 00:14:00.100 "data_offset": 2048, 00:14:00.100 "data_size": 63488 00:14:00.100 }, 00:14:00.100 { 00:14:00.100 "name": "BaseBdev4", 00:14:00.100 "uuid": "1f83ef23-2463-5459-86c4-9f38325004e5", 00:14:00.100 "is_configured": true, 00:14:00.100 "data_offset": 2048, 00:14:00.100 "data_size": 63488 00:14:00.100 } 00:14:00.100 ] 00:14:00.100 }' 00:14:00.100 17:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.100 17:30:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.360 17:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 
-- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:00.360 17:30:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.360 17:30:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.360 [2024-12-07 17:30:33.669303] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:00.360 [2024-12-07 17:30:33.669333] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:00.360 [2024-12-07 17:30:33.669419] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:00.360 [2024-12-07 17:30:33.669499] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:00.360 [2024-12-07 17:30:33.669509] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:00.360 17:30:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.360 17:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.360 17:30:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.360 17:30:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.360 17:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:00.360 17:30:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.360 17:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:00.360 17:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:00.360 17:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:00.360 17:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 
00:14:00.360 17:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:00.360 17:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:00.360 17:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:00.360 17:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:00.360 17:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:00.360 17:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:00.360 17:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:00.360 17:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:00.360 17:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:00.620 /dev/nbd0 00:14:00.620 17:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:00.620 17:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:00.620 17:30:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:00.620 17:30:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:00.620 17:30:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:00.620 17:30:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:00.620 17:30:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:00.620 17:30:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:00.620 17:30:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:00.620 
17:30:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:00.620 17:30:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:00.620 1+0 records in 00:14:00.620 1+0 records out 00:14:00.620 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00047437 s, 8.6 MB/s 00:14:00.620 17:30:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:00.620 17:30:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:00.620 17:30:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:00.620 17:30:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:00.620 17:30:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:00.620 17:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:00.620 17:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:00.620 17:30:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:00.880 /dev/nbd1 00:14:00.880 17:30:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:00.880 17:30:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:00.880 17:30:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:00.880 17:30:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:00.880 17:30:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:00.880 17:30:34 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:00.880 17:30:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:00.880 17:30:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:00.880 17:30:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:00.880 17:30:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:00.880 17:30:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:00.880 1+0 records in 00:14:00.880 1+0 records out 00:14:00.880 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275533 s, 14.9 MB/s 00:14:00.880 17:30:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:00.880 17:30:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:00.880 17:30:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:00.880 17:30:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:00.880 17:30:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:00.880 17:30:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:00.880 17:30:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:00.880 17:30:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:01.141 17:30:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:01.141 17:30:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:01.141 17:30:34 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:01.141 17:30:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:01.141 17:30:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:01.141 17:30:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:01.141 17:30:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:01.401 17:30:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:01.401 17:30:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:01.401 17:30:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:01.401 17:30:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:01.401 17:30:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:01.401 17:30:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:01.401 17:30:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:01.401 17:30:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:01.401 17:30:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:01.401 17:30:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:01.661 17:30:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:01.661 17:30:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:01.661 17:30:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:01.661 17:30:34 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:01.661 17:30:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:01.661 17:30:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:01.661 17:30:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:01.661 17:30:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:01.661 17:30:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:01.661 17:30:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:01.661 17:30:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.662 17:30:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.662 17:30:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.662 17:30:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:01.662 17:30:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.662 17:30:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.662 [2024-12-07 17:30:34.836833] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:01.662 [2024-12-07 17:30:34.836890] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:01.662 [2024-12-07 17:30:34.836912] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:14:01.662 [2024-12-07 17:30:34.836921] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:01.662 [2024-12-07 17:30:34.839037] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:01.662 [2024-12-07 17:30:34.839073] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:01.662 [2024-12-07 17:30:34.839164] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:01.662 [2024-12-07 17:30:34.839215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:01.662 [2024-12-07 17:30:34.839384] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:01.662 [2024-12-07 17:30:34.839492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:01.662 spare 00:14:01.662 17:30:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.662 17:30:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:01.662 17:30:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.662 17:30:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.662 [2024-12-07 17:30:34.939397] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:01.662 [2024-12-07 17:30:34.939423] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:01.662 [2024-12-07 17:30:34.939698] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:14:01.662 [2024-12-07 17:30:34.939879] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:01.662 [2024-12-07 17:30:34.939895] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:01.662 [2024-12-07 17:30:34.940068] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:01.662 17:30:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.662 17:30:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 3 00:14:01.662 17:30:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:01.662 17:30:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:01.662 17:30:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:01.662 17:30:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:01.662 17:30:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:01.662 17:30:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.662 17:30:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.662 17:30:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.662 17:30:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.662 17:30:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.662 17:30:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.662 17:30:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.662 17:30:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.662 17:30:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.662 17:30:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.662 "name": "raid_bdev1", 00:14:01.662 "uuid": "2678fe94-481f-464c-a05d-889464e6c45f", 00:14:01.662 "strip_size_kb": 0, 00:14:01.662 "state": "online", 00:14:01.662 "raid_level": "raid1", 00:14:01.662 "superblock": true, 00:14:01.662 "num_base_bdevs": 4, 00:14:01.662 "num_base_bdevs_discovered": 3, 00:14:01.662 "num_base_bdevs_operational": 
3, 00:14:01.662 "base_bdevs_list": [ 00:14:01.662 { 00:14:01.662 "name": "spare", 00:14:01.662 "uuid": "85be4cb7-ded8-507b-972e-bccd40de33c8", 00:14:01.662 "is_configured": true, 00:14:01.662 "data_offset": 2048, 00:14:01.662 "data_size": 63488 00:14:01.662 }, 00:14:01.662 { 00:14:01.662 "name": null, 00:14:01.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.662 "is_configured": false, 00:14:01.662 "data_offset": 2048, 00:14:01.662 "data_size": 63488 00:14:01.662 }, 00:14:01.662 { 00:14:01.662 "name": "BaseBdev3", 00:14:01.662 "uuid": "bd1998ce-7450-5a34-811f-057559aa6542", 00:14:01.662 "is_configured": true, 00:14:01.662 "data_offset": 2048, 00:14:01.662 "data_size": 63488 00:14:01.662 }, 00:14:01.662 { 00:14:01.662 "name": "BaseBdev4", 00:14:01.662 "uuid": "1f83ef23-2463-5459-86c4-9f38325004e5", 00:14:01.662 "is_configured": true, 00:14:01.662 "data_offset": 2048, 00:14:01.662 "data_size": 63488 00:14:01.662 } 00:14:01.662 ] 00:14:01.662 }' 00:14:01.662 17:30:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.662 17:30:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.230 17:30:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:02.231 17:30:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:02.231 17:30:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:02.231 17:30:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:02.231 17:30:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:02.231 17:30:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.231 17:30:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.231 17:30:35 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.231 17:30:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.231 17:30:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.231 17:30:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:02.231 "name": "raid_bdev1", 00:14:02.231 "uuid": "2678fe94-481f-464c-a05d-889464e6c45f", 00:14:02.231 "strip_size_kb": 0, 00:14:02.231 "state": "online", 00:14:02.231 "raid_level": "raid1", 00:14:02.231 "superblock": true, 00:14:02.231 "num_base_bdevs": 4, 00:14:02.231 "num_base_bdevs_discovered": 3, 00:14:02.231 "num_base_bdevs_operational": 3, 00:14:02.231 "base_bdevs_list": [ 00:14:02.231 { 00:14:02.231 "name": "spare", 00:14:02.231 "uuid": "85be4cb7-ded8-507b-972e-bccd40de33c8", 00:14:02.231 "is_configured": true, 00:14:02.231 "data_offset": 2048, 00:14:02.231 "data_size": 63488 00:14:02.231 }, 00:14:02.231 { 00:14:02.231 "name": null, 00:14:02.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.231 "is_configured": false, 00:14:02.231 "data_offset": 2048, 00:14:02.231 "data_size": 63488 00:14:02.231 }, 00:14:02.231 { 00:14:02.231 "name": "BaseBdev3", 00:14:02.231 "uuid": "bd1998ce-7450-5a34-811f-057559aa6542", 00:14:02.231 "is_configured": true, 00:14:02.231 "data_offset": 2048, 00:14:02.231 "data_size": 63488 00:14:02.231 }, 00:14:02.231 { 00:14:02.231 "name": "BaseBdev4", 00:14:02.231 "uuid": "1f83ef23-2463-5459-86c4-9f38325004e5", 00:14:02.231 "is_configured": true, 00:14:02.231 "data_offset": 2048, 00:14:02.231 "data_size": 63488 00:14:02.231 } 00:14:02.231 ] 00:14:02.231 }' 00:14:02.231 17:30:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:02.231 17:30:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:02.231 17:30:35 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:02.231 17:30:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:02.231 17:30:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.231 17:30:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:02.231 17:30:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.231 17:30:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.231 17:30:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.231 17:30:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:02.231 17:30:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:02.231 17:30:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.231 17:30:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.231 [2024-12-07 17:30:35.575607] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:02.231 17:30:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.231 17:30:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:02.231 17:30:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:02.231 17:30:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:02.231 17:30:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:02.231 17:30:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:02.231 17:30:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- 
# local num_base_bdevs_operational=2 00:14:02.231 17:30:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.231 17:30:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.231 17:30:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.231 17:30:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.231 17:30:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.231 17:30:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.231 17:30:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.231 17:30:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.231 17:30:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.490 17:30:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.490 "name": "raid_bdev1", 00:14:02.490 "uuid": "2678fe94-481f-464c-a05d-889464e6c45f", 00:14:02.490 "strip_size_kb": 0, 00:14:02.490 "state": "online", 00:14:02.490 "raid_level": "raid1", 00:14:02.490 "superblock": true, 00:14:02.490 "num_base_bdevs": 4, 00:14:02.490 "num_base_bdevs_discovered": 2, 00:14:02.490 "num_base_bdevs_operational": 2, 00:14:02.490 "base_bdevs_list": [ 00:14:02.490 { 00:14:02.490 "name": null, 00:14:02.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.490 "is_configured": false, 00:14:02.490 "data_offset": 0, 00:14:02.490 "data_size": 63488 00:14:02.490 }, 00:14:02.490 { 00:14:02.490 "name": null, 00:14:02.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.490 "is_configured": false, 00:14:02.490 "data_offset": 2048, 00:14:02.490 "data_size": 63488 00:14:02.490 }, 00:14:02.490 { 00:14:02.490 "name": "BaseBdev3", 00:14:02.490 
"uuid": "bd1998ce-7450-5a34-811f-057559aa6542", 00:14:02.490 "is_configured": true, 00:14:02.490 "data_offset": 2048, 00:14:02.490 "data_size": 63488 00:14:02.490 }, 00:14:02.490 { 00:14:02.490 "name": "BaseBdev4", 00:14:02.490 "uuid": "1f83ef23-2463-5459-86c4-9f38325004e5", 00:14:02.490 "is_configured": true, 00:14:02.490 "data_offset": 2048, 00:14:02.490 "data_size": 63488 00:14:02.490 } 00:14:02.490 ] 00:14:02.490 }' 00:14:02.490 17:30:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.490 17:30:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.749 17:30:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:02.749 17:30:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.749 17:30:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.749 [2024-12-07 17:30:36.010922] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:02.749 [2024-12-07 17:30:36.011149] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:02.749 [2024-12-07 17:30:36.011174] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:02.749 [2024-12-07 17:30:36.011211] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:02.749 [2024-12-07 17:30:36.025721] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:14:02.749 17:30:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.749 17:30:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:02.749 [2024-12-07 17:30:36.027536] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:03.688 17:30:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:03.688 17:30:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:03.688 17:30:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:03.688 17:30:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:03.688 17:30:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:03.688 17:30:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.688 17:30:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.688 17:30:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.688 17:30:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.688 17:30:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.949 17:30:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:03.949 "name": "raid_bdev1", 00:14:03.949 "uuid": "2678fe94-481f-464c-a05d-889464e6c45f", 00:14:03.949 "strip_size_kb": 0, 00:14:03.949 "state": "online", 00:14:03.949 "raid_level": "raid1", 
00:14:03.949 "superblock": true, 00:14:03.949 "num_base_bdevs": 4, 00:14:03.949 "num_base_bdevs_discovered": 3, 00:14:03.949 "num_base_bdevs_operational": 3, 00:14:03.949 "process": { 00:14:03.949 "type": "rebuild", 00:14:03.949 "target": "spare", 00:14:03.949 "progress": { 00:14:03.949 "blocks": 20480, 00:14:03.949 "percent": 32 00:14:03.949 } 00:14:03.949 }, 00:14:03.949 "base_bdevs_list": [ 00:14:03.949 { 00:14:03.949 "name": "spare", 00:14:03.949 "uuid": "85be4cb7-ded8-507b-972e-bccd40de33c8", 00:14:03.949 "is_configured": true, 00:14:03.949 "data_offset": 2048, 00:14:03.949 "data_size": 63488 00:14:03.949 }, 00:14:03.949 { 00:14:03.949 "name": null, 00:14:03.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.949 "is_configured": false, 00:14:03.949 "data_offset": 2048, 00:14:03.949 "data_size": 63488 00:14:03.949 }, 00:14:03.949 { 00:14:03.949 "name": "BaseBdev3", 00:14:03.949 "uuid": "bd1998ce-7450-5a34-811f-057559aa6542", 00:14:03.949 "is_configured": true, 00:14:03.949 "data_offset": 2048, 00:14:03.949 "data_size": 63488 00:14:03.949 }, 00:14:03.949 { 00:14:03.949 "name": "BaseBdev4", 00:14:03.949 "uuid": "1f83ef23-2463-5459-86c4-9f38325004e5", 00:14:03.949 "is_configured": true, 00:14:03.949 "data_offset": 2048, 00:14:03.949 "data_size": 63488 00:14:03.949 } 00:14:03.949 ] 00:14:03.949 }' 00:14:03.949 17:30:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:03.949 17:30:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:03.949 17:30:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:03.949 17:30:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:03.949 17:30:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:03.949 17:30:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:03.949 17:30:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.949 [2024-12-07 17:30:37.191442] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:03.949 [2024-12-07 17:30:37.232177] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:03.949 [2024-12-07 17:30:37.232231] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:03.949 [2024-12-07 17:30:37.232248] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:03.949 [2024-12-07 17:30:37.232256] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:03.949 17:30:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.949 17:30:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:03.949 17:30:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:03.949 17:30:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:03.949 17:30:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:03.949 17:30:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:03.949 17:30:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:03.949 17:30:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.949 17:30:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.949 17:30:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.949 17:30:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.949 17:30:37 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.949 17:30:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.949 17:30:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.949 17:30:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.949 17:30:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.949 17:30:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.949 "name": "raid_bdev1", 00:14:03.949 "uuid": "2678fe94-481f-464c-a05d-889464e6c45f", 00:14:03.949 "strip_size_kb": 0, 00:14:03.949 "state": "online", 00:14:03.949 "raid_level": "raid1", 00:14:03.949 "superblock": true, 00:14:03.949 "num_base_bdevs": 4, 00:14:03.949 "num_base_bdevs_discovered": 2, 00:14:03.949 "num_base_bdevs_operational": 2, 00:14:03.949 "base_bdevs_list": [ 00:14:03.949 { 00:14:03.949 "name": null, 00:14:03.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.949 "is_configured": false, 00:14:03.949 "data_offset": 0, 00:14:03.949 "data_size": 63488 00:14:03.949 }, 00:14:03.949 { 00:14:03.949 "name": null, 00:14:03.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.949 "is_configured": false, 00:14:03.949 "data_offset": 2048, 00:14:03.949 "data_size": 63488 00:14:03.949 }, 00:14:03.949 { 00:14:03.949 "name": "BaseBdev3", 00:14:03.949 "uuid": "bd1998ce-7450-5a34-811f-057559aa6542", 00:14:03.949 "is_configured": true, 00:14:03.950 "data_offset": 2048, 00:14:03.950 "data_size": 63488 00:14:03.950 }, 00:14:03.950 { 00:14:03.950 "name": "BaseBdev4", 00:14:03.950 "uuid": "1f83ef23-2463-5459-86c4-9f38325004e5", 00:14:03.950 "is_configured": true, 00:14:03.950 "data_offset": 2048, 00:14:03.950 "data_size": 63488 00:14:03.950 } 00:14:03.950 ] 00:14:03.950 }' 00:14:03.950 17:30:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:14:03.950 17:30:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.521 17:30:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:04.521 17:30:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.521 17:30:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.521 [2024-12-07 17:30:37.704481] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:04.521 [2024-12-07 17:30:37.704540] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.521 [2024-12-07 17:30:37.704569] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:14:04.521 [2024-12-07 17:30:37.704578] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.521 [2024-12-07 17:30:37.705045] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.521 [2024-12-07 17:30:37.705076] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:04.521 [2024-12-07 17:30:37.705166] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:04.521 [2024-12-07 17:30:37.705182] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:04.521 [2024-12-07 17:30:37.705197] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:04.521 [2024-12-07 17:30:37.705218] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:04.521 [2024-12-07 17:30:37.718718] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:14:04.521 spare 00:14:04.521 17:30:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.521 17:30:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:04.521 [2024-12-07 17:30:37.720535] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:05.462 17:30:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:05.462 17:30:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:05.462 17:30:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:05.462 17:30:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:05.462 17:30:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:05.462 17:30:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.462 17:30:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.462 17:30:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.462 17:30:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.462 17:30:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.462 17:30:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:05.462 "name": "raid_bdev1", 00:14:05.462 "uuid": "2678fe94-481f-464c-a05d-889464e6c45f", 00:14:05.462 "strip_size_kb": 0, 00:14:05.462 "state": "online", 00:14:05.462 
"raid_level": "raid1", 00:14:05.462 "superblock": true, 00:14:05.462 "num_base_bdevs": 4, 00:14:05.462 "num_base_bdevs_discovered": 3, 00:14:05.462 "num_base_bdevs_operational": 3, 00:14:05.462 "process": { 00:14:05.462 "type": "rebuild", 00:14:05.462 "target": "spare", 00:14:05.462 "progress": { 00:14:05.462 "blocks": 20480, 00:14:05.462 "percent": 32 00:14:05.462 } 00:14:05.462 }, 00:14:05.462 "base_bdevs_list": [ 00:14:05.462 { 00:14:05.462 "name": "spare", 00:14:05.462 "uuid": "85be4cb7-ded8-507b-972e-bccd40de33c8", 00:14:05.462 "is_configured": true, 00:14:05.462 "data_offset": 2048, 00:14:05.462 "data_size": 63488 00:14:05.462 }, 00:14:05.462 { 00:14:05.462 "name": null, 00:14:05.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.463 "is_configured": false, 00:14:05.463 "data_offset": 2048, 00:14:05.463 "data_size": 63488 00:14:05.463 }, 00:14:05.463 { 00:14:05.463 "name": "BaseBdev3", 00:14:05.463 "uuid": "bd1998ce-7450-5a34-811f-057559aa6542", 00:14:05.463 "is_configured": true, 00:14:05.463 "data_offset": 2048, 00:14:05.463 "data_size": 63488 00:14:05.463 }, 00:14:05.463 { 00:14:05.463 "name": "BaseBdev4", 00:14:05.463 "uuid": "1f83ef23-2463-5459-86c4-9f38325004e5", 00:14:05.463 "is_configured": true, 00:14:05.463 "data_offset": 2048, 00:14:05.463 "data_size": 63488 00:14:05.463 } 00:14:05.463 ] 00:14:05.463 }' 00:14:05.463 17:30:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:05.463 17:30:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:05.463 17:30:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:05.723 17:30:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:05.723 17:30:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:05.723 17:30:38 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.723 17:30:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.723 [2024-12-07 17:30:38.876388] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:05.723 [2024-12-07 17:30:38.925320] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:05.723 [2024-12-07 17:30:38.925374] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:05.723 [2024-12-07 17:30:38.925405] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:05.723 [2024-12-07 17:30:38.925413] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:05.723 17:30:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.723 17:30:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:05.723 17:30:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:05.723 17:30:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:05.723 17:30:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:05.723 17:30:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:05.723 17:30:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:05.723 17:30:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.723 17:30:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.723 17:30:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.723 17:30:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.723 
17:30:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.723 17:30:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.723 17:30:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.723 17:30:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.723 17:30:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.723 17:30:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.723 "name": "raid_bdev1", 00:14:05.723 "uuid": "2678fe94-481f-464c-a05d-889464e6c45f", 00:14:05.723 "strip_size_kb": 0, 00:14:05.723 "state": "online", 00:14:05.723 "raid_level": "raid1", 00:14:05.723 "superblock": true, 00:14:05.723 "num_base_bdevs": 4, 00:14:05.723 "num_base_bdevs_discovered": 2, 00:14:05.723 "num_base_bdevs_operational": 2, 00:14:05.723 "base_bdevs_list": [ 00:14:05.723 { 00:14:05.723 "name": null, 00:14:05.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.723 "is_configured": false, 00:14:05.723 "data_offset": 0, 00:14:05.723 "data_size": 63488 00:14:05.723 }, 00:14:05.723 { 00:14:05.723 "name": null, 00:14:05.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.723 "is_configured": false, 00:14:05.723 "data_offset": 2048, 00:14:05.723 "data_size": 63488 00:14:05.723 }, 00:14:05.723 { 00:14:05.723 "name": "BaseBdev3", 00:14:05.723 "uuid": "bd1998ce-7450-5a34-811f-057559aa6542", 00:14:05.723 "is_configured": true, 00:14:05.723 "data_offset": 2048, 00:14:05.723 "data_size": 63488 00:14:05.723 }, 00:14:05.723 { 00:14:05.723 "name": "BaseBdev4", 00:14:05.723 "uuid": "1f83ef23-2463-5459-86c4-9f38325004e5", 00:14:05.723 "is_configured": true, 00:14:05.723 "data_offset": 2048, 00:14:05.723 "data_size": 63488 00:14:05.723 } 00:14:05.723 ] 00:14:05.723 }' 00:14:05.723 17:30:38 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.723 17:30:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.294 17:30:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:06.294 17:30:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:06.294 17:30:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:06.294 17:30:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:06.294 17:30:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:06.294 17:30:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.294 17:30:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.294 17:30:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.294 17:30:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.294 17:30:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.294 17:30:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:06.294 "name": "raid_bdev1", 00:14:06.294 "uuid": "2678fe94-481f-464c-a05d-889464e6c45f", 00:14:06.294 "strip_size_kb": 0, 00:14:06.294 "state": "online", 00:14:06.294 "raid_level": "raid1", 00:14:06.294 "superblock": true, 00:14:06.294 "num_base_bdevs": 4, 00:14:06.294 "num_base_bdevs_discovered": 2, 00:14:06.294 "num_base_bdevs_operational": 2, 00:14:06.294 "base_bdevs_list": [ 00:14:06.294 { 00:14:06.294 "name": null, 00:14:06.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.294 "is_configured": false, 00:14:06.294 "data_offset": 0, 00:14:06.294 "data_size": 63488 00:14:06.294 }, 00:14:06.294 
{ 00:14:06.294 "name": null, 00:14:06.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.294 "is_configured": false, 00:14:06.294 "data_offset": 2048, 00:14:06.294 "data_size": 63488 00:14:06.294 }, 00:14:06.294 { 00:14:06.294 "name": "BaseBdev3", 00:14:06.294 "uuid": "bd1998ce-7450-5a34-811f-057559aa6542", 00:14:06.294 "is_configured": true, 00:14:06.294 "data_offset": 2048, 00:14:06.294 "data_size": 63488 00:14:06.294 }, 00:14:06.294 { 00:14:06.294 "name": "BaseBdev4", 00:14:06.294 "uuid": "1f83ef23-2463-5459-86c4-9f38325004e5", 00:14:06.294 "is_configured": true, 00:14:06.294 "data_offset": 2048, 00:14:06.294 "data_size": 63488 00:14:06.294 } 00:14:06.294 ] 00:14:06.294 }' 00:14:06.294 17:30:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:06.294 17:30:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:06.294 17:30:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:06.294 17:30:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:06.294 17:30:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:06.294 17:30:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.294 17:30:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.294 17:30:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.294 17:30:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:06.294 17:30:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.294 17:30:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.294 [2024-12-07 17:30:39.528356] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:06.294 [2024-12-07 17:30:39.528410] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:06.294 [2024-12-07 17:30:39.528429] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:14:06.294 [2024-12-07 17:30:39.528439] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:06.294 [2024-12-07 17:30:39.528874] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:06.294 [2024-12-07 17:30:39.528904] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:06.294 [2024-12-07 17:30:39.528989] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:06.294 [2024-12-07 17:30:39.529010] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:06.294 [2024-12-07 17:30:39.529019] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:06.294 [2024-12-07 17:30:39.529043] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:06.294 BaseBdev1 00:14:06.294 17:30:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.294 17:30:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:07.235 17:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:07.235 17:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:07.235 17:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:07.235 17:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:07.235 17:30:40 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:07.235 17:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:07.235 17:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:07.235 17:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:07.235 17:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:07.235 17:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:07.235 17:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.235 17:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.235 17:30:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.235 17:30:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.235 17:30:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.235 17:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:07.235 "name": "raid_bdev1", 00:14:07.235 "uuid": "2678fe94-481f-464c-a05d-889464e6c45f", 00:14:07.235 "strip_size_kb": 0, 00:14:07.235 "state": "online", 00:14:07.235 "raid_level": "raid1", 00:14:07.235 "superblock": true, 00:14:07.235 "num_base_bdevs": 4, 00:14:07.235 "num_base_bdevs_discovered": 2, 00:14:07.235 "num_base_bdevs_operational": 2, 00:14:07.235 "base_bdevs_list": [ 00:14:07.235 { 00:14:07.235 "name": null, 00:14:07.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.235 "is_configured": false, 00:14:07.235 "data_offset": 0, 00:14:07.235 "data_size": 63488 00:14:07.235 }, 00:14:07.235 { 00:14:07.235 "name": null, 00:14:07.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.235 
"is_configured": false, 00:14:07.235 "data_offset": 2048, 00:14:07.235 "data_size": 63488 00:14:07.235 }, 00:14:07.235 { 00:14:07.235 "name": "BaseBdev3", 00:14:07.235 "uuid": "bd1998ce-7450-5a34-811f-057559aa6542", 00:14:07.235 "is_configured": true, 00:14:07.235 "data_offset": 2048, 00:14:07.235 "data_size": 63488 00:14:07.235 }, 00:14:07.235 { 00:14:07.235 "name": "BaseBdev4", 00:14:07.235 "uuid": "1f83ef23-2463-5459-86c4-9f38325004e5", 00:14:07.235 "is_configured": true, 00:14:07.235 "data_offset": 2048, 00:14:07.235 "data_size": 63488 00:14:07.235 } 00:14:07.235 ] 00:14:07.235 }' 00:14:07.235 17:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:07.235 17:30:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.806 17:30:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:07.806 17:30:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:07.806 17:30:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:07.806 17:30:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:07.806 17:30:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:07.806 17:30:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.806 17:30:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.806 17:30:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.806 17:30:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.806 17:30:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.806 17:30:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:14:07.806 "name": "raid_bdev1", 00:14:07.806 "uuid": "2678fe94-481f-464c-a05d-889464e6c45f", 00:14:07.806 "strip_size_kb": 0, 00:14:07.806 "state": "online", 00:14:07.806 "raid_level": "raid1", 00:14:07.806 "superblock": true, 00:14:07.806 "num_base_bdevs": 4, 00:14:07.806 "num_base_bdevs_discovered": 2, 00:14:07.806 "num_base_bdevs_operational": 2, 00:14:07.806 "base_bdevs_list": [ 00:14:07.806 { 00:14:07.806 "name": null, 00:14:07.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.806 "is_configured": false, 00:14:07.806 "data_offset": 0, 00:14:07.806 "data_size": 63488 00:14:07.806 }, 00:14:07.806 { 00:14:07.806 "name": null, 00:14:07.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.806 "is_configured": false, 00:14:07.806 "data_offset": 2048, 00:14:07.806 "data_size": 63488 00:14:07.806 }, 00:14:07.806 { 00:14:07.806 "name": "BaseBdev3", 00:14:07.806 "uuid": "bd1998ce-7450-5a34-811f-057559aa6542", 00:14:07.806 "is_configured": true, 00:14:07.806 "data_offset": 2048, 00:14:07.806 "data_size": 63488 00:14:07.806 }, 00:14:07.806 { 00:14:07.806 "name": "BaseBdev4", 00:14:07.806 "uuid": "1f83ef23-2463-5459-86c4-9f38325004e5", 00:14:07.806 "is_configured": true, 00:14:07.806 "data_offset": 2048, 00:14:07.806 "data_size": 63488 00:14:07.806 } 00:14:07.806 ] 00:14:07.806 }' 00:14:07.806 17:30:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:07.806 17:30:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:07.806 17:30:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:07.806 17:30:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:07.806 17:30:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:07.806 17:30:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:14:07.806 17:30:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:07.806 17:30:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:07.806 17:30:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:07.806 17:30:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:07.806 17:30:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:07.806 17:30:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:07.806 17:30:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.806 17:30:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.806 [2024-12-07 17:30:41.153598] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:07.806 [2024-12-07 17:30:41.153797] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:07.806 [2024-12-07 17:30:41.153820] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:07.806 request: 00:14:07.806 { 00:14:07.806 "base_bdev": "BaseBdev1", 00:14:07.806 "raid_bdev": "raid_bdev1", 00:14:07.806 "method": "bdev_raid_add_base_bdev", 00:14:07.806 "req_id": 1 00:14:07.806 } 00:14:07.806 Got JSON-RPC error response 00:14:07.806 response: 00:14:07.806 { 00:14:07.806 "code": -22, 00:14:07.806 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:07.806 } 00:14:07.806 17:30:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:07.806 17:30:41 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:14:07.806 17:30:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:07.806 17:30:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:07.806 17:30:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:07.806 17:30:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:09.187 17:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:09.187 17:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:09.187 17:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:09.187 17:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:09.187 17:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:09.187 17:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:09.187 17:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:09.188 17:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.188 17:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:09.188 17:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.188 17:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.188 17:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.188 17:30:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.188 17:30:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:09.188 17:30:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.188 17:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.188 "name": "raid_bdev1", 00:14:09.188 "uuid": "2678fe94-481f-464c-a05d-889464e6c45f", 00:14:09.188 "strip_size_kb": 0, 00:14:09.188 "state": "online", 00:14:09.188 "raid_level": "raid1", 00:14:09.188 "superblock": true, 00:14:09.188 "num_base_bdevs": 4, 00:14:09.188 "num_base_bdevs_discovered": 2, 00:14:09.188 "num_base_bdevs_operational": 2, 00:14:09.188 "base_bdevs_list": [ 00:14:09.188 { 00:14:09.188 "name": null, 00:14:09.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.188 "is_configured": false, 00:14:09.188 "data_offset": 0, 00:14:09.188 "data_size": 63488 00:14:09.188 }, 00:14:09.188 { 00:14:09.188 "name": null, 00:14:09.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.188 "is_configured": false, 00:14:09.188 "data_offset": 2048, 00:14:09.188 "data_size": 63488 00:14:09.188 }, 00:14:09.188 { 00:14:09.188 "name": "BaseBdev3", 00:14:09.188 "uuid": "bd1998ce-7450-5a34-811f-057559aa6542", 00:14:09.188 "is_configured": true, 00:14:09.188 "data_offset": 2048, 00:14:09.188 "data_size": 63488 00:14:09.188 }, 00:14:09.188 { 00:14:09.188 "name": "BaseBdev4", 00:14:09.188 "uuid": "1f83ef23-2463-5459-86c4-9f38325004e5", 00:14:09.188 "is_configured": true, 00:14:09.188 "data_offset": 2048, 00:14:09.188 "data_size": 63488 00:14:09.188 } 00:14:09.188 ] 00:14:09.188 }' 00:14:09.188 17:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.188 17:30:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.447 17:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:09.447 17:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:09.447 17:30:42 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:09.447 17:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:09.447 17:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:09.447 17:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.447 17:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.447 17:30:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.447 17:30:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.447 17:30:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.447 17:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:09.447 "name": "raid_bdev1", 00:14:09.447 "uuid": "2678fe94-481f-464c-a05d-889464e6c45f", 00:14:09.447 "strip_size_kb": 0, 00:14:09.447 "state": "online", 00:14:09.447 "raid_level": "raid1", 00:14:09.447 "superblock": true, 00:14:09.447 "num_base_bdevs": 4, 00:14:09.447 "num_base_bdevs_discovered": 2, 00:14:09.447 "num_base_bdevs_operational": 2, 00:14:09.447 "base_bdevs_list": [ 00:14:09.447 { 00:14:09.447 "name": null, 00:14:09.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.447 "is_configured": false, 00:14:09.447 "data_offset": 0, 00:14:09.447 "data_size": 63488 00:14:09.447 }, 00:14:09.447 { 00:14:09.447 "name": null, 00:14:09.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.447 "is_configured": false, 00:14:09.447 "data_offset": 2048, 00:14:09.447 "data_size": 63488 00:14:09.447 }, 00:14:09.447 { 00:14:09.447 "name": "BaseBdev3", 00:14:09.447 "uuid": "bd1998ce-7450-5a34-811f-057559aa6542", 00:14:09.447 "is_configured": true, 00:14:09.447 "data_offset": 2048, 00:14:09.447 "data_size": 63488 00:14:09.447 }, 
00:14:09.447 { 00:14:09.447 "name": "BaseBdev4", 00:14:09.447 "uuid": "1f83ef23-2463-5459-86c4-9f38325004e5", 00:14:09.447 "is_configured": true, 00:14:09.447 "data_offset": 2048, 00:14:09.447 "data_size": 63488 00:14:09.447 } 00:14:09.447 ] 00:14:09.447 }' 00:14:09.447 17:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:09.447 17:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:09.447 17:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:09.447 17:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:09.447 17:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78001 00:14:09.447 17:30:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 78001 ']' 00:14:09.447 17:30:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 78001 00:14:09.447 17:30:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:09.447 17:30:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:09.447 17:30:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78001 00:14:09.447 killing process with pid 78001 00:14:09.447 Received shutdown signal, test time was about 60.000000 seconds 00:14:09.447 00:14:09.447 Latency(us) 00:14:09.447 [2024-12-07T17:30:42.829Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:09.447 [2024-12-07T17:30:42.829Z] =================================================================================================================== 00:14:09.447 [2024-12-07T17:30:42.829Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:09.447 17:30:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:14:09.447 17:30:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:09.447 17:30:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78001' 00:14:09.447 17:30:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 78001 00:14:09.447 [2024-12-07 17:30:42.708192] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:09.447 [2024-12-07 17:30:42.708305] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:09.447 17:30:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 78001 00:14:09.447 [2024-12-07 17:30:42.708374] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:09.447 [2024-12-07 17:30:42.708383] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:10.015 [2024-12-07 17:30:43.171363] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:10.952 ************************************ 00:14:10.952 END TEST raid_rebuild_test_sb 00:14:10.952 ************************************ 00:14:10.952 17:30:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:10.952 00:14:10.952 real 0m24.673s 00:14:10.952 user 0m29.895s 00:14:10.952 sys 0m3.687s 00:14:10.952 17:30:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:10.952 17:30:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.952 17:30:44 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:14:10.952 17:30:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:10.952 17:30:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:10.952 17:30:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:14:10.952 ************************************ 00:14:10.952 START TEST raid_rebuild_test_io 00:14:10.952 ************************************ 00:14:10.952 17:30:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:14:10.952 17:30:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:10.952 17:30:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:10.952 17:30:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:10.952 17:30:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:10.952 17:30:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:10.953 17:30:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:10.953 17:30:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:10.953 17:30:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:10.953 17:30:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:10.953 17:30:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:10.953 17:30:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:10.953 17:30:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:10.953 17:30:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:10.953 17:30:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:10.953 17:30:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:10.953 17:30:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:10.953 17:30:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:14:10.953 17:30:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:10.953 17:30:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:10.953 17:30:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:10.953 17:30:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:10.953 17:30:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:10.953 17:30:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:10.953 17:30:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:10.953 17:30:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:10.953 17:30:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:10.953 17:30:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:10.953 17:30:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:10.953 17:30:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:10.953 17:30:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78751 00:14:10.953 17:30:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:10.953 17:30:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78751 00:14:10.953 17:30:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 78751 ']' 00:14:10.953 17:30:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.953 17:30:44 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:14:10.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.953 17:30:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.953 17:30:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:10.953 17:30:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.213 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:11.213 Zero copy mechanism will not be used. 00:14:11.213 [2024-12-07 17:30:44.408729] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:14:11.213 [2024-12-07 17:30:44.408847] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78751 ] 00:14:11.213 [2024-12-07 17:30:44.586334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.479 [2024-12-07 17:30:44.695698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.748 [2024-12-07 17:30:44.890296] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:11.748 [2024-12-07 17:30:44.890354] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:12.011 17:30:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:12.011 17:30:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:14:12.011 17:30:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:12.011 17:30:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 
00:14:12.011 17:30:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.011 17:30:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.011 BaseBdev1_malloc 00:14:12.011 17:30:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.011 17:30:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:12.011 17:30:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.011 17:30:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.011 [2024-12-07 17:30:45.258504] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:12.011 [2024-12-07 17:30:45.258564] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:12.011 [2024-12-07 17:30:45.258584] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:12.011 [2024-12-07 17:30:45.258594] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:12.011 [2024-12-07 17:30:45.260607] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:12.011 [2024-12-07 17:30:45.260646] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:12.011 BaseBdev1 00:14:12.011 17:30:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.011 17:30:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:12.011 17:30:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:12.011 17:30:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.011 17:30:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:14:12.011 BaseBdev2_malloc 00:14:12.011 17:30:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.011 17:30:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:12.011 17:30:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.011 17:30:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.011 [2024-12-07 17:30:45.314499] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:12.011 [2024-12-07 17:30:45.314558] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:12.011 [2024-12-07 17:30:45.314582] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:12.011 [2024-12-07 17:30:45.314594] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:12.011 [2024-12-07 17:30:45.316688] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:12.011 [2024-12-07 17:30:45.316725] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:12.011 BaseBdev2 00:14:12.011 17:30:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.011 17:30:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:12.011 17:30:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:12.011 17:30:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.011 17:30:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.011 BaseBdev3_malloc 00:14:12.011 17:30:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.011 17:30:45 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:12.011 17:30:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.011 17:30:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.011 [2024-12-07 17:30:45.376385] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:12.011 [2024-12-07 17:30:45.376436] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:12.011 [2024-12-07 17:30:45.376455] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:12.011 [2024-12-07 17:30:45.376466] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:12.011 [2024-12-07 17:30:45.378435] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:12.011 [2024-12-07 17:30:45.378472] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:12.011 BaseBdev3 00:14:12.011 17:30:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.011 17:30:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:12.011 17:30:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:12.011 17:30:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.011 17:30:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.269 BaseBdev4_malloc 00:14:12.269 17:30:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.269 17:30:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:12.269 17:30:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:12.269 17:30:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.269 [2024-12-07 17:30:45.429161] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:12.269 [2024-12-07 17:30:45.429217] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:12.269 [2024-12-07 17:30:45.429235] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:12.269 [2024-12-07 17:30:45.429245] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:12.270 [2024-12-07 17:30:45.431199] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:12.270 [2024-12-07 17:30:45.431237] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:12.270 BaseBdev4 00:14:12.270 17:30:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.270 17:30:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:12.270 17:30:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.270 17:30:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.270 spare_malloc 00:14:12.270 17:30:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.270 17:30:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:12.270 17:30:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.270 17:30:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.270 spare_delay 00:14:12.270 17:30:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.270 17:30:45 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:12.270 17:30:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.270 17:30:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.270 [2024-12-07 17:30:45.494291] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:12.270 [2024-12-07 17:30:45.494354] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:12.270 [2024-12-07 17:30:45.494369] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:12.270 [2024-12-07 17:30:45.494379] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:12.270 [2024-12-07 17:30:45.496362] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:12.270 [2024-12-07 17:30:45.496401] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:12.270 spare 00:14:12.270 17:30:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.270 17:30:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:12.270 17:30:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.270 17:30:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.270 [2024-12-07 17:30:45.506307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:12.270 [2024-12-07 17:30:45.508061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:12.270 [2024-12-07 17:30:45.508125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:12.270 [2024-12-07 17:30:45.508173] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:14:12.270 [2024-12-07 17:30:45.508246] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:12.270 [2024-12-07 17:30:45.508265] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:12.270 [2024-12-07 17:30:45.508514] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:12.270 [2024-12-07 17:30:45.508692] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:12.270 [2024-12-07 17:30:45.508712] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:12.270 [2024-12-07 17:30:45.508858] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:12.270 17:30:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.270 17:30:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:12.270 17:30:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:12.270 17:30:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:12.270 17:30:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:12.270 17:30:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:12.270 17:30:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:12.270 17:30:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.270 17:30:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.270 17:30:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.270 17:30:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:14:12.270 17:30:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.270 17:30:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.270 17:30:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.270 17:30:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.270 17:30:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.270 17:30:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.270 "name": "raid_bdev1", 00:14:12.270 "uuid": "b1001f2b-a8fa-4b34-8c86-d1cbeee7f06b", 00:14:12.270 "strip_size_kb": 0, 00:14:12.270 "state": "online", 00:14:12.270 "raid_level": "raid1", 00:14:12.270 "superblock": false, 00:14:12.270 "num_base_bdevs": 4, 00:14:12.270 "num_base_bdevs_discovered": 4, 00:14:12.270 "num_base_bdevs_operational": 4, 00:14:12.270 "base_bdevs_list": [ 00:14:12.270 { 00:14:12.270 "name": "BaseBdev1", 00:14:12.270 "uuid": "d854f1fa-5f2b-5f60-911a-94d5f271c20d", 00:14:12.270 "is_configured": true, 00:14:12.270 "data_offset": 0, 00:14:12.270 "data_size": 65536 00:14:12.270 }, 00:14:12.270 { 00:14:12.270 "name": "BaseBdev2", 00:14:12.270 "uuid": "6ca97d44-b405-5871-a4f9-6ca36df4c41b", 00:14:12.270 "is_configured": true, 00:14:12.270 "data_offset": 0, 00:14:12.270 "data_size": 65536 00:14:12.270 }, 00:14:12.270 { 00:14:12.270 "name": "BaseBdev3", 00:14:12.270 "uuid": "ef375dd1-0862-5875-8989-6c42c446a09a", 00:14:12.270 "is_configured": true, 00:14:12.270 "data_offset": 0, 00:14:12.270 "data_size": 65536 00:14:12.270 }, 00:14:12.270 { 00:14:12.270 "name": "BaseBdev4", 00:14:12.270 "uuid": "afc64435-22d3-5d1b-ae32-a58fa175f3a3", 00:14:12.270 "is_configured": true, 00:14:12.270 "data_offset": 0, 00:14:12.270 "data_size": 65536 00:14:12.270 } 00:14:12.270 ] 00:14:12.270 }' 00:14:12.270 
17:30:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.270 17:30:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.837 17:30:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:12.837 17:30:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.837 17:30:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.837 17:30:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:12.837 [2024-12-07 17:30:45.945840] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:12.837 17:30:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.837 17:30:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:12.837 17:30:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:12.837 17:30:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.837 17:30:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.837 17:30:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.837 17:30:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.837 17:30:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:12.837 17:30:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:12.837 17:30:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:12.837 17:30:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:12.837 17:30:46 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.837 17:30:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.837 [2024-12-07 17:30:46.029363] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:12.837 17:30:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.837 17:30:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:12.837 17:30:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:12.837 17:30:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:12.837 17:30:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:12.837 17:30:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:12.837 17:30:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:12.837 17:30:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.837 17:30:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.837 17:30:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.837 17:30:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.837 17:30:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.837 17:30:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.837 17:30:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.837 17:30:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.837 17:30:46 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.837 17:30:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.837 "name": "raid_bdev1", 00:14:12.837 "uuid": "b1001f2b-a8fa-4b34-8c86-d1cbeee7f06b", 00:14:12.837 "strip_size_kb": 0, 00:14:12.837 "state": "online", 00:14:12.837 "raid_level": "raid1", 00:14:12.837 "superblock": false, 00:14:12.837 "num_base_bdevs": 4, 00:14:12.837 "num_base_bdevs_discovered": 3, 00:14:12.837 "num_base_bdevs_operational": 3, 00:14:12.837 "base_bdevs_list": [ 00:14:12.837 { 00:14:12.837 "name": null, 00:14:12.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.837 "is_configured": false, 00:14:12.837 "data_offset": 0, 00:14:12.837 "data_size": 65536 00:14:12.837 }, 00:14:12.837 { 00:14:12.837 "name": "BaseBdev2", 00:14:12.837 "uuid": "6ca97d44-b405-5871-a4f9-6ca36df4c41b", 00:14:12.837 "is_configured": true, 00:14:12.837 "data_offset": 0, 00:14:12.837 "data_size": 65536 00:14:12.837 }, 00:14:12.837 { 00:14:12.837 "name": "BaseBdev3", 00:14:12.837 "uuid": "ef375dd1-0862-5875-8989-6c42c446a09a", 00:14:12.837 "is_configured": true, 00:14:12.837 "data_offset": 0, 00:14:12.837 "data_size": 65536 00:14:12.837 }, 00:14:12.837 { 00:14:12.837 "name": "BaseBdev4", 00:14:12.837 "uuid": "afc64435-22d3-5d1b-ae32-a58fa175f3a3", 00:14:12.837 "is_configured": true, 00:14:12.837 "data_offset": 0, 00:14:12.837 "data_size": 65536 00:14:12.837 } 00:14:12.837 ] 00:14:12.837 }' 00:14:12.837 17:30:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.837 17:30:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.837 [2024-12-07 17:30:46.120757] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:12.837 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:12.837 Zero copy mechanism will not be used. 00:14:12.837 Running I/O for 60 seconds... 
00:14:13.405 17:30:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:13.405 17:30:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.405 17:30:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.405 [2024-12-07 17:30:46.485577] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:13.405 17:30:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.405 17:30:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:13.405 [2024-12-07 17:30:46.544665] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:13.405 [2024-12-07 17:30:46.546623] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:13.405 [2024-12-07 17:30:46.655008] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:13.405 [2024-12-07 17:30:46.655584] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:13.405 [2024-12-07 17:30:46.779624] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:13.405 [2024-12-07 17:30:46.780328] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:13.974 180.00 IOPS, 540.00 MiB/s [2024-12-07T17:30:47.356Z] [2024-12-07 17:30:47.135241] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:13.974 [2024-12-07 17:30:47.351793] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:13.974 [2024-12-07 17:30:47.352093] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:14.235 17:30:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:14.235 17:30:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:14.235 17:30:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:14.235 17:30:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:14.235 17:30:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:14.235 17:30:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.235 17:30:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.235 17:30:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.235 17:30:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.235 17:30:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.235 17:30:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:14.235 "name": "raid_bdev1", 00:14:14.235 "uuid": "b1001f2b-a8fa-4b34-8c86-d1cbeee7f06b", 00:14:14.235 "strip_size_kb": 0, 00:14:14.235 "state": "online", 00:14:14.235 "raid_level": "raid1", 00:14:14.235 "superblock": false, 00:14:14.235 "num_base_bdevs": 4, 00:14:14.235 "num_base_bdevs_discovered": 4, 00:14:14.235 "num_base_bdevs_operational": 4, 00:14:14.235 "process": { 00:14:14.235 "type": "rebuild", 00:14:14.235 "target": "spare", 00:14:14.235 "progress": { 00:14:14.235 "blocks": 10240, 00:14:14.235 "percent": 15 00:14:14.235 } 00:14:14.235 }, 00:14:14.235 "base_bdevs_list": [ 00:14:14.235 { 00:14:14.235 "name": "spare", 00:14:14.235 "uuid": 
"05c6df8e-23fc-5266-8632-e86db35bf5c3", 00:14:14.235 "is_configured": true, 00:14:14.235 "data_offset": 0, 00:14:14.235 "data_size": 65536 00:14:14.235 }, 00:14:14.235 { 00:14:14.235 "name": "BaseBdev2", 00:14:14.235 "uuid": "6ca97d44-b405-5871-a4f9-6ca36df4c41b", 00:14:14.235 "is_configured": true, 00:14:14.235 "data_offset": 0, 00:14:14.235 "data_size": 65536 00:14:14.235 }, 00:14:14.235 { 00:14:14.235 "name": "BaseBdev3", 00:14:14.235 "uuid": "ef375dd1-0862-5875-8989-6c42c446a09a", 00:14:14.235 "is_configured": true, 00:14:14.235 "data_offset": 0, 00:14:14.235 "data_size": 65536 00:14:14.235 }, 00:14:14.235 { 00:14:14.235 "name": "BaseBdev4", 00:14:14.235 "uuid": "afc64435-22d3-5d1b-ae32-a58fa175f3a3", 00:14:14.235 "is_configured": true, 00:14:14.235 "data_offset": 0, 00:14:14.235 "data_size": 65536 00:14:14.235 } 00:14:14.235 ] 00:14:14.235 }' 00:14:14.235 17:30:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:14.496 17:30:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:14.496 17:30:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:14.496 [2024-12-07 17:30:47.672764] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:14.496 17:30:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:14.496 17:30:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:14.496 17:30:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.496 17:30:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.496 [2024-12-07 17:30:47.698151] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:14.496 [2024-12-07 17:30:47.803148] bdev_raid.c:2571:raid_bdev_process_finish_done: 
*WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:14.496 [2024-12-07 17:30:47.806703] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:14.496 [2024-12-07 17:30:47.806797] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:14.496 [2024-12-07 17:30:47.806827] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:14.496 [2024-12-07 17:30:47.823565] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:14:14.496 17:30:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.496 17:30:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:14.496 17:30:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:14.496 17:30:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:14.496 17:30:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:14.496 17:30:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:14.496 17:30:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:14.496 17:30:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.496 17:30:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.496 17:30:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.496 17:30:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.496 17:30:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.496 17:30:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:14:14.496 17:30:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.496 17:30:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.756 17:30:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.756 17:30:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.756 "name": "raid_bdev1", 00:14:14.756 "uuid": "b1001f2b-a8fa-4b34-8c86-d1cbeee7f06b", 00:14:14.756 "strip_size_kb": 0, 00:14:14.756 "state": "online", 00:14:14.756 "raid_level": "raid1", 00:14:14.756 "superblock": false, 00:14:14.756 "num_base_bdevs": 4, 00:14:14.756 "num_base_bdevs_discovered": 3, 00:14:14.756 "num_base_bdevs_operational": 3, 00:14:14.756 "base_bdevs_list": [ 00:14:14.756 { 00:14:14.756 "name": null, 00:14:14.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.756 "is_configured": false, 00:14:14.756 "data_offset": 0, 00:14:14.756 "data_size": 65536 00:14:14.756 }, 00:14:14.756 { 00:14:14.756 "name": "BaseBdev2", 00:14:14.756 "uuid": "6ca97d44-b405-5871-a4f9-6ca36df4c41b", 00:14:14.756 "is_configured": true, 00:14:14.756 "data_offset": 0, 00:14:14.756 "data_size": 65536 00:14:14.756 }, 00:14:14.756 { 00:14:14.756 "name": "BaseBdev3", 00:14:14.756 "uuid": "ef375dd1-0862-5875-8989-6c42c446a09a", 00:14:14.756 "is_configured": true, 00:14:14.756 "data_offset": 0, 00:14:14.756 "data_size": 65536 00:14:14.756 }, 00:14:14.756 { 00:14:14.756 "name": "BaseBdev4", 00:14:14.756 "uuid": "afc64435-22d3-5d1b-ae32-a58fa175f3a3", 00:14:14.756 "is_configured": true, 00:14:14.756 "data_offset": 0, 00:14:14.756 "data_size": 65536 00:14:14.756 } 00:14:14.756 ] 00:14:14.756 }' 00:14:14.756 17:30:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.756 17:30:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.017 157.00 IOPS, 471.00 MiB/s 
[2024-12-07T17:30:48.399Z] 17:30:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:15.017 17:30:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:15.018 17:30:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:15.018 17:30:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:15.018 17:30:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:15.018 17:30:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.018 17:30:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.018 17:30:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.018 17:30:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.018 17:30:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.018 17:30:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:15.018 "name": "raid_bdev1", 00:14:15.018 "uuid": "b1001f2b-a8fa-4b34-8c86-d1cbeee7f06b", 00:14:15.018 "strip_size_kb": 0, 00:14:15.018 "state": "online", 00:14:15.018 "raid_level": "raid1", 00:14:15.018 "superblock": false, 00:14:15.018 "num_base_bdevs": 4, 00:14:15.018 "num_base_bdevs_discovered": 3, 00:14:15.018 "num_base_bdevs_operational": 3, 00:14:15.018 "base_bdevs_list": [ 00:14:15.018 { 00:14:15.018 "name": null, 00:14:15.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.018 "is_configured": false, 00:14:15.018 "data_offset": 0, 00:14:15.018 "data_size": 65536 00:14:15.018 }, 00:14:15.018 { 00:14:15.018 "name": "BaseBdev2", 00:14:15.018 "uuid": "6ca97d44-b405-5871-a4f9-6ca36df4c41b", 00:14:15.018 "is_configured": true, 00:14:15.018 
"data_offset": 0, 00:14:15.018 "data_size": 65536 00:14:15.018 }, 00:14:15.018 { 00:14:15.018 "name": "BaseBdev3", 00:14:15.018 "uuid": "ef375dd1-0862-5875-8989-6c42c446a09a", 00:14:15.018 "is_configured": true, 00:14:15.018 "data_offset": 0, 00:14:15.018 "data_size": 65536 00:14:15.018 }, 00:14:15.018 { 00:14:15.018 "name": "BaseBdev4", 00:14:15.018 "uuid": "afc64435-22d3-5d1b-ae32-a58fa175f3a3", 00:14:15.018 "is_configured": true, 00:14:15.018 "data_offset": 0, 00:14:15.018 "data_size": 65536 00:14:15.018 } 00:14:15.018 ] 00:14:15.018 }' 00:14:15.018 17:30:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:15.018 17:30:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:15.018 17:30:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:15.278 17:30:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:15.278 17:30:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:15.278 17:30:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.278 17:30:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.278 [2024-12-07 17:30:48.418801] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:15.278 17:30:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.278 17:30:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:15.278 [2024-12-07 17:30:48.483372] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:15.278 [2024-12-07 17:30:48.485287] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:15.278 [2024-12-07 17:30:48.593455] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:15.278 [2024-12-07 17:30:48.594159] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:15.538 [2024-12-07 17:30:48.704713] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:15.538 [2024-12-07 17:30:48.705070] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:15.798 [2024-12-07 17:30:48.958927] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:16.058 155.33 IOPS, 466.00 MiB/s [2024-12-07T17:30:49.440Z] [2024-12-07 17:30:49.188940] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:16.319 17:30:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:16.319 17:30:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:16.319 17:30:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:16.319 17:30:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:16.319 17:30:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:16.319 17:30:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.319 17:30:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.319 17:30:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.319 17:30:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.319 17:30:49 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.319 17:30:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:16.319 "name": "raid_bdev1", 00:14:16.319 "uuid": "b1001f2b-a8fa-4b34-8c86-d1cbeee7f06b", 00:14:16.319 "strip_size_kb": 0, 00:14:16.319 "state": "online", 00:14:16.319 "raid_level": "raid1", 00:14:16.319 "superblock": false, 00:14:16.319 "num_base_bdevs": 4, 00:14:16.319 "num_base_bdevs_discovered": 4, 00:14:16.319 "num_base_bdevs_operational": 4, 00:14:16.319 "process": { 00:14:16.319 "type": "rebuild", 00:14:16.319 "target": "spare", 00:14:16.319 "progress": { 00:14:16.319 "blocks": 12288, 00:14:16.319 "percent": 18 00:14:16.319 } 00:14:16.319 }, 00:14:16.319 "base_bdevs_list": [ 00:14:16.319 { 00:14:16.319 "name": "spare", 00:14:16.319 "uuid": "05c6df8e-23fc-5266-8632-e86db35bf5c3", 00:14:16.319 "is_configured": true, 00:14:16.319 "data_offset": 0, 00:14:16.319 "data_size": 65536 00:14:16.319 }, 00:14:16.319 { 00:14:16.319 "name": "BaseBdev2", 00:14:16.319 "uuid": "6ca97d44-b405-5871-a4f9-6ca36df4c41b", 00:14:16.319 "is_configured": true, 00:14:16.319 "data_offset": 0, 00:14:16.319 "data_size": 65536 00:14:16.319 }, 00:14:16.319 { 00:14:16.319 "name": "BaseBdev3", 00:14:16.319 "uuid": "ef375dd1-0862-5875-8989-6c42c446a09a", 00:14:16.319 "is_configured": true, 00:14:16.319 "data_offset": 0, 00:14:16.319 "data_size": 65536 00:14:16.319 }, 00:14:16.319 { 00:14:16.319 "name": "BaseBdev4", 00:14:16.319 "uuid": "afc64435-22d3-5d1b-ae32-a58fa175f3a3", 00:14:16.319 "is_configured": true, 00:14:16.319 "data_offset": 0, 00:14:16.319 "data_size": 65536 00:14:16.319 } 00:14:16.319 ] 00:14:16.319 }' 00:14:16.319 17:30:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:16.319 [2024-12-07 17:30:49.522647] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:16.319 17:30:49 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:16.319 17:30:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:16.319 17:30:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:16.319 17:30:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:16.319 17:30:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:16.319 17:30:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:16.319 17:30:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:16.319 17:30:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:16.319 17:30:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.319 17:30:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.319 [2024-12-07 17:30:49.590074] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:16.579 [2024-12-07 17:30:49.762013] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:16.580 [2024-12-07 17:30:49.870134] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:14:16.580 [2024-12-07 17:30:49.870217] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:14:16.580 17:30:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.580 17:30:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:16.580 17:30:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:16.580 17:30:49 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:16.580 17:30:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:16.580 17:30:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:16.580 17:30:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:16.580 17:30:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:16.580 17:30:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.580 17:30:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.580 17:30:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.580 17:30:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.580 17:30:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.580 17:30:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:16.580 "name": "raid_bdev1", 00:14:16.580 "uuid": "b1001f2b-a8fa-4b34-8c86-d1cbeee7f06b", 00:14:16.580 "strip_size_kb": 0, 00:14:16.580 "state": "online", 00:14:16.580 "raid_level": "raid1", 00:14:16.580 "superblock": false, 00:14:16.580 "num_base_bdevs": 4, 00:14:16.580 "num_base_bdevs_discovered": 3, 00:14:16.580 "num_base_bdevs_operational": 3, 00:14:16.580 "process": { 00:14:16.580 "type": "rebuild", 00:14:16.580 "target": "spare", 00:14:16.580 "progress": { 00:14:16.580 "blocks": 16384, 00:14:16.580 "percent": 25 00:14:16.580 } 00:14:16.580 }, 00:14:16.580 "base_bdevs_list": [ 00:14:16.580 { 00:14:16.580 "name": "spare", 00:14:16.580 "uuid": "05c6df8e-23fc-5266-8632-e86db35bf5c3", 00:14:16.580 "is_configured": true, 00:14:16.580 "data_offset": 0, 00:14:16.580 "data_size": 65536 00:14:16.580 }, 00:14:16.580 { 
00:14:16.580 "name": null, 00:14:16.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.580 "is_configured": false, 00:14:16.580 "data_offset": 0, 00:14:16.580 "data_size": 65536 00:14:16.580 }, 00:14:16.580 { 00:14:16.580 "name": "BaseBdev3", 00:14:16.580 "uuid": "ef375dd1-0862-5875-8989-6c42c446a09a", 00:14:16.580 "is_configured": true, 00:14:16.580 "data_offset": 0, 00:14:16.580 "data_size": 65536 00:14:16.580 }, 00:14:16.580 { 00:14:16.580 "name": "BaseBdev4", 00:14:16.580 "uuid": "afc64435-22d3-5d1b-ae32-a58fa175f3a3", 00:14:16.580 "is_configured": true, 00:14:16.580 "data_offset": 0, 00:14:16.580 "data_size": 65536 00:14:16.580 } 00:14:16.580 ] 00:14:16.580 }' 00:14:16.580 17:30:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:16.840 17:30:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:16.840 17:30:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:16.840 17:30:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:16.840 17:30:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=481 00:14:16.840 17:30:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:16.840 17:30:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:16.840 17:30:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:16.840 17:30:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:16.840 17:30:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:16.840 17:30:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:16.840 17:30:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:14:16.840 17:30:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.840 17:30:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.840 17:30:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.840 17:30:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.840 17:30:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:16.840 "name": "raid_bdev1", 00:14:16.840 "uuid": "b1001f2b-a8fa-4b34-8c86-d1cbeee7f06b", 00:14:16.840 "strip_size_kb": 0, 00:14:16.840 "state": "online", 00:14:16.840 "raid_level": "raid1", 00:14:16.840 "superblock": false, 00:14:16.840 "num_base_bdevs": 4, 00:14:16.840 "num_base_bdevs_discovered": 3, 00:14:16.840 "num_base_bdevs_operational": 3, 00:14:16.840 "process": { 00:14:16.840 "type": "rebuild", 00:14:16.840 "target": "spare", 00:14:16.840 "progress": { 00:14:16.840 "blocks": 18432, 00:14:16.840 "percent": 28 00:14:16.840 } 00:14:16.840 }, 00:14:16.840 "base_bdevs_list": [ 00:14:16.840 { 00:14:16.840 "name": "spare", 00:14:16.840 "uuid": "05c6df8e-23fc-5266-8632-e86db35bf5c3", 00:14:16.840 "is_configured": true, 00:14:16.840 "data_offset": 0, 00:14:16.840 "data_size": 65536 00:14:16.840 }, 00:14:16.840 { 00:14:16.840 "name": null, 00:14:16.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.840 "is_configured": false, 00:14:16.840 "data_offset": 0, 00:14:16.840 "data_size": 65536 00:14:16.840 }, 00:14:16.840 { 00:14:16.840 "name": "BaseBdev3", 00:14:16.840 "uuid": "ef375dd1-0862-5875-8989-6c42c446a09a", 00:14:16.840 "is_configured": true, 00:14:16.840 "data_offset": 0, 00:14:16.840 "data_size": 65536 00:14:16.840 }, 00:14:16.840 { 00:14:16.840 "name": "BaseBdev4", 00:14:16.840 "uuid": "afc64435-22d3-5d1b-ae32-a58fa175f3a3", 00:14:16.840 "is_configured": true, 00:14:16.841 "data_offset": 
0, 00:14:16.841 "data_size": 65536 00:14:16.841 } 00:14:16.841 ] 00:14:16.841 }' 00:14:16.841 17:30:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:16.841 17:30:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:16.841 17:30:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:16.841 17:30:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:16.841 17:30:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:17.411 127.00 IOPS, 381.00 MiB/s [2024-12-07T17:30:50.793Z] [2024-12-07 17:30:50.592559] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:17.671 [2024-12-07 17:30:50.802602] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:14:17.931 17:30:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:17.931 17:30:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:17.931 17:30:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:17.931 17:30:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:17.931 17:30:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:17.931 17:30:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:17.931 17:30:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.931 17:30:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.931 17:30:51 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.931 17:30:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:17.931 113.00 IOPS, 339.00 MiB/s [2024-12-07T17:30:51.313Z] 17:30:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.931 17:30:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:17.931 "name": "raid_bdev1", 00:14:17.931 "uuid": "b1001f2b-a8fa-4b34-8c86-d1cbeee7f06b", 00:14:17.931 "strip_size_kb": 0, 00:14:17.931 "state": "online", 00:14:17.931 "raid_level": "raid1", 00:14:17.931 "superblock": false, 00:14:17.931 "num_base_bdevs": 4, 00:14:17.931 "num_base_bdevs_discovered": 3, 00:14:17.931 "num_base_bdevs_operational": 3, 00:14:17.931 "process": { 00:14:17.931 "type": "rebuild", 00:14:17.931 "target": "spare", 00:14:17.931 "progress": { 00:14:17.931 "blocks": 36864, 00:14:17.931 "percent": 56 00:14:17.931 } 00:14:17.931 }, 00:14:17.931 "base_bdevs_list": [ 00:14:17.931 { 00:14:17.931 "name": "spare", 00:14:17.931 "uuid": "05c6df8e-23fc-5266-8632-e86db35bf5c3", 00:14:17.931 "is_configured": true, 00:14:17.931 "data_offset": 0, 00:14:17.931 "data_size": 65536 00:14:17.931 }, 00:14:17.931 { 00:14:17.931 "name": null, 00:14:17.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.931 "is_configured": false, 00:14:17.931 "data_offset": 0, 00:14:17.931 "data_size": 65536 00:14:17.931 }, 00:14:17.931 { 00:14:17.931 "name": "BaseBdev3", 00:14:17.931 "uuid": "ef375dd1-0862-5875-8989-6c42c446a09a", 00:14:17.931 "is_configured": true, 00:14:17.931 "data_offset": 0, 00:14:17.931 "data_size": 65536 00:14:17.931 }, 00:14:17.931 { 00:14:17.931 "name": "BaseBdev4", 00:14:17.931 "uuid": "afc64435-22d3-5d1b-ae32-a58fa175f3a3", 00:14:17.931 "is_configured": true, 00:14:17.931 "data_offset": 0, 00:14:17.931 "data_size": 65536 00:14:17.931 } 00:14:17.931 ] 00:14:17.931 }' 00:14:17.931 17:30:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 
-- # jq -r '.process.type // "none"' 00:14:17.931 17:30:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:17.931 17:30:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:17.931 17:30:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:17.931 17:30:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:17.931 [2024-12-07 17:30:51.290284] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:14:18.191 [2024-12-07 17:30:51.504245] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:14:18.760 [2024-12-07 17:30:51.946607] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:14:19.020 101.17 IOPS, 303.50 MiB/s [2024-12-07T17:30:52.402Z] 17:30:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:19.020 17:30:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:19.020 17:30:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:19.020 17:30:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:19.020 17:30:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:19.020 17:30:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:19.020 17:30:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.020 17:30:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.020 17:30:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] 
| select(.name == "raid_bdev1")' 00:14:19.020 17:30:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.020 17:30:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.020 17:30:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:19.020 "name": "raid_bdev1", 00:14:19.020 "uuid": "b1001f2b-a8fa-4b34-8c86-d1cbeee7f06b", 00:14:19.020 "strip_size_kb": 0, 00:14:19.020 "state": "online", 00:14:19.020 "raid_level": "raid1", 00:14:19.020 "superblock": false, 00:14:19.020 "num_base_bdevs": 4, 00:14:19.020 "num_base_bdevs_discovered": 3, 00:14:19.020 "num_base_bdevs_operational": 3, 00:14:19.020 "process": { 00:14:19.020 "type": "rebuild", 00:14:19.020 "target": "spare", 00:14:19.020 "progress": { 00:14:19.020 "blocks": 57344, 00:14:19.020 "percent": 87 00:14:19.020 } 00:14:19.020 }, 00:14:19.020 "base_bdevs_list": [ 00:14:19.020 { 00:14:19.020 "name": "spare", 00:14:19.020 "uuid": "05c6df8e-23fc-5266-8632-e86db35bf5c3", 00:14:19.020 "is_configured": true, 00:14:19.020 "data_offset": 0, 00:14:19.020 "data_size": 65536 00:14:19.020 }, 00:14:19.020 { 00:14:19.020 "name": null, 00:14:19.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.020 "is_configured": false, 00:14:19.020 "data_offset": 0, 00:14:19.020 "data_size": 65536 00:14:19.020 }, 00:14:19.020 { 00:14:19.020 "name": "BaseBdev3", 00:14:19.020 "uuid": "ef375dd1-0862-5875-8989-6c42c446a09a", 00:14:19.020 "is_configured": true, 00:14:19.020 "data_offset": 0, 00:14:19.020 "data_size": 65536 00:14:19.020 }, 00:14:19.020 { 00:14:19.021 "name": "BaseBdev4", 00:14:19.021 "uuid": "afc64435-22d3-5d1b-ae32-a58fa175f3a3", 00:14:19.021 "is_configured": true, 00:14:19.021 "data_offset": 0, 00:14:19.021 "data_size": 65536 00:14:19.021 } 00:14:19.021 ] 00:14:19.021 }' 00:14:19.021 17:30:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:19.021 17:30:52 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:19.021 17:30:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:19.021 [2024-12-07 17:30:52.377198] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:14:19.021 [2024-12-07 17:30:52.377455] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:14:19.281 17:30:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:19.281 17:30:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:19.541 [2024-12-07 17:30:52.708365] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:19.541 [2024-12-07 17:30:52.808143] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:19.541 [2024-12-07 17:30:52.809542] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:20.060 91.14 IOPS, 273.43 MiB/s [2024-12-07T17:30:53.442Z] 17:30:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:20.060 17:30:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:20.060 17:30:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:20.060 17:30:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:20.060 17:30:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:20.060 17:30:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:20.060 17:30:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.060 17:30:53 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.060 17:30:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.060 17:30:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.320 17:30:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.320 17:30:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:20.320 "name": "raid_bdev1", 00:14:20.320 "uuid": "b1001f2b-a8fa-4b34-8c86-d1cbeee7f06b", 00:14:20.320 "strip_size_kb": 0, 00:14:20.320 "state": "online", 00:14:20.320 "raid_level": "raid1", 00:14:20.320 "superblock": false, 00:14:20.320 "num_base_bdevs": 4, 00:14:20.320 "num_base_bdevs_discovered": 3, 00:14:20.320 "num_base_bdevs_operational": 3, 00:14:20.320 "base_bdevs_list": [ 00:14:20.320 { 00:14:20.320 "name": "spare", 00:14:20.320 "uuid": "05c6df8e-23fc-5266-8632-e86db35bf5c3", 00:14:20.320 "is_configured": true, 00:14:20.320 "data_offset": 0, 00:14:20.320 "data_size": 65536 00:14:20.320 }, 00:14:20.320 { 00:14:20.320 "name": null, 00:14:20.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.320 "is_configured": false, 00:14:20.320 "data_offset": 0, 00:14:20.320 "data_size": 65536 00:14:20.320 }, 00:14:20.320 { 00:14:20.320 "name": "BaseBdev3", 00:14:20.320 "uuid": "ef375dd1-0862-5875-8989-6c42c446a09a", 00:14:20.320 "is_configured": true, 00:14:20.320 "data_offset": 0, 00:14:20.320 "data_size": 65536 00:14:20.320 }, 00:14:20.320 { 00:14:20.320 "name": "BaseBdev4", 00:14:20.320 "uuid": "afc64435-22d3-5d1b-ae32-a58fa175f3a3", 00:14:20.320 "is_configured": true, 00:14:20.320 "data_offset": 0, 00:14:20.320 "data_size": 65536 00:14:20.320 } 00:14:20.320 ] 00:14:20.320 }' 00:14:20.320 17:30:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:20.320 17:30:53 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:20.320 17:30:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:20.320 17:30:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:20.320 17:30:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:14:20.320 17:30:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:20.320 17:30:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:20.320 17:30:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:20.320 17:30:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:20.320 17:30:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:20.320 17:30:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.320 17:30:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.320 17:30:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.320 17:30:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.320 17:30:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.320 17:30:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:20.320 "name": "raid_bdev1", 00:14:20.320 "uuid": "b1001f2b-a8fa-4b34-8c86-d1cbeee7f06b", 00:14:20.320 "strip_size_kb": 0, 00:14:20.321 "state": "online", 00:14:20.321 "raid_level": "raid1", 00:14:20.321 "superblock": false, 00:14:20.321 "num_base_bdevs": 4, 00:14:20.321 "num_base_bdevs_discovered": 3, 00:14:20.321 "num_base_bdevs_operational": 3, 00:14:20.321 "base_bdevs_list": [ 00:14:20.321 { 00:14:20.321 
"name": "spare", 00:14:20.321 "uuid": "05c6df8e-23fc-5266-8632-e86db35bf5c3", 00:14:20.321 "is_configured": true, 00:14:20.321 "data_offset": 0, 00:14:20.321 "data_size": 65536 00:14:20.321 }, 00:14:20.321 { 00:14:20.321 "name": null, 00:14:20.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.321 "is_configured": false, 00:14:20.321 "data_offset": 0, 00:14:20.321 "data_size": 65536 00:14:20.321 }, 00:14:20.321 { 00:14:20.321 "name": "BaseBdev3", 00:14:20.321 "uuid": "ef375dd1-0862-5875-8989-6c42c446a09a", 00:14:20.321 "is_configured": true, 00:14:20.321 "data_offset": 0, 00:14:20.321 "data_size": 65536 00:14:20.321 }, 00:14:20.321 { 00:14:20.321 "name": "BaseBdev4", 00:14:20.321 "uuid": "afc64435-22d3-5d1b-ae32-a58fa175f3a3", 00:14:20.321 "is_configured": true, 00:14:20.321 "data_offset": 0, 00:14:20.321 "data_size": 65536 00:14:20.321 } 00:14:20.321 ] 00:14:20.321 }' 00:14:20.321 17:30:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:20.321 17:30:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:20.321 17:30:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:20.580 17:30:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:20.580 17:30:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:20.580 17:30:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:20.580 17:30:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:20.580 17:30:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:20.580 17:30:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:20.580 17:30:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:14:20.580 17:30:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.580 17:30:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.580 17:30:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.580 17:30:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.580 17:30:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.580 17:30:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.580 17:30:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.580 17:30:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.580 17:30:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.580 17:30:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.580 "name": "raid_bdev1", 00:14:20.580 "uuid": "b1001f2b-a8fa-4b34-8c86-d1cbeee7f06b", 00:14:20.580 "strip_size_kb": 0, 00:14:20.580 "state": "online", 00:14:20.580 "raid_level": "raid1", 00:14:20.580 "superblock": false, 00:14:20.580 "num_base_bdevs": 4, 00:14:20.580 "num_base_bdevs_discovered": 3, 00:14:20.580 "num_base_bdevs_operational": 3, 00:14:20.580 "base_bdevs_list": [ 00:14:20.580 { 00:14:20.580 "name": "spare", 00:14:20.580 "uuid": "05c6df8e-23fc-5266-8632-e86db35bf5c3", 00:14:20.580 "is_configured": true, 00:14:20.580 "data_offset": 0, 00:14:20.580 "data_size": 65536 00:14:20.580 }, 00:14:20.580 { 00:14:20.580 "name": null, 00:14:20.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.580 "is_configured": false, 00:14:20.580 "data_offset": 0, 00:14:20.580 "data_size": 65536 00:14:20.580 }, 00:14:20.580 { 00:14:20.580 "name": "BaseBdev3", 00:14:20.580 "uuid": 
"ef375dd1-0862-5875-8989-6c42c446a09a", 00:14:20.580 "is_configured": true, 00:14:20.580 "data_offset": 0, 00:14:20.580 "data_size": 65536 00:14:20.580 }, 00:14:20.580 { 00:14:20.580 "name": "BaseBdev4", 00:14:20.580 "uuid": "afc64435-22d3-5d1b-ae32-a58fa175f3a3", 00:14:20.580 "is_configured": true, 00:14:20.580 "data_offset": 0, 00:14:20.580 "data_size": 65536 00:14:20.580 } 00:14:20.580 ] 00:14:20.580 }' 00:14:20.580 17:30:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.580 17:30:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.840 84.75 IOPS, 254.25 MiB/s [2024-12-07T17:30:54.222Z] 17:30:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:20.840 17:30:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.840 17:30:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.840 [2024-12-07 17:30:54.166885] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:20.840 [2024-12-07 17:30:54.166918] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:21.100 00:14:21.100 Latency(us) 00:14:21.100 [2024-12-07T17:30:54.482Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:21.100 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:21.100 raid_bdev1 : 8.13 83.67 251.00 0.00 0.00 17211.58 300.49 124547.02 00:14:21.100 [2024-12-07T17:30:54.482Z] =================================================================================================================== 00:14:21.100 [2024-12-07T17:30:54.482Z] Total : 83.67 251.00 0.00 0.00 17211.58 300.49 124547.02 00:14:21.100 [2024-12-07 17:30:54.254963] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:21.100 [2024-12-07 17:30:54.255026] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:21.100 [2024-12-07 17:30:54.255120] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:21.100 [2024-12-07 17:30:54.255129] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:21.100 { 00:14:21.100 "results": [ 00:14:21.100 { 00:14:21.100 "job": "raid_bdev1", 00:14:21.100 "core_mask": "0x1", 00:14:21.100 "workload": "randrw", 00:14:21.100 "percentage": 50, 00:14:21.100 "status": "finished", 00:14:21.100 "queue_depth": 2, 00:14:21.100 "io_size": 3145728, 00:14:21.100 "runtime": 8.127643, 00:14:21.100 "iops": 83.66509208143124, 00:14:21.100 "mibps": 250.99527624429373, 00:14:21.100 "io_failed": 0, 00:14:21.100 "io_timeout": 0, 00:14:21.100 "avg_latency_us": 17211.577703570514, 00:14:21.100 "min_latency_us": 300.49257641921395, 00:14:21.100 "max_latency_us": 124547.01834061135 00:14:21.100 } 00:14:21.100 ], 00:14:21.100 "core_count": 1 00:14:21.100 } 00:14:21.100 17:30:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.100 17:30:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.100 17:30:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:21.100 17:30:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.100 17:30:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.100 17:30:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.100 17:30:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:21.100 17:30:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:21.100 17:30:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:21.100 
17:30:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:21.100 17:30:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:21.100 17:30:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:21.100 17:30:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:21.100 17:30:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:21.100 17:30:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:21.100 17:30:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:21.100 17:30:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:21.100 17:30:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:21.100 17:30:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:21.360 /dev/nbd0 00:14:21.360 17:30:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:21.360 17:30:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:21.360 17:30:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:21.360 17:30:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:21.360 17:30:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:21.360 17:30:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:21.360 17:30:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:21.360 17:30:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:21.360 17:30:54 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:21.360 17:30:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:21.360 17:30:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:21.360 1+0 records in 00:14:21.360 1+0 records out 00:14:21.360 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000398141 s, 10.3 MB/s 00:14:21.360 17:30:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:21.360 17:30:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:21.360 17:30:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:21.360 17:30:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:21.360 17:30:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:21.360 17:30:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:21.360 17:30:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:21.360 17:30:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:21.360 17:30:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:21.360 17:30:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:14:21.360 17:30:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:21.360 17:30:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:21.360 17:30:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:21.361 17:30:54 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:21.361 17:30:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:21.361 17:30:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:21.361 17:30:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:21.361 17:30:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:21.361 17:30:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:21.361 17:30:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:21.361 17:30:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:21.361 17:30:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:14:21.621 /dev/nbd1 00:14:21.621 17:30:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:21.621 17:30:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:21.621 17:30:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:21.621 17:30:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:21.621 17:30:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:21.621 17:30:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:21.621 17:30:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:21.621 17:30:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:21.621 17:30:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:21.621 17:30:54 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:21.621 17:30:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:21.621 1+0 records in 00:14:21.621 1+0 records out 00:14:21.621 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000597476 s, 6.9 MB/s 00:14:21.621 17:30:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:21.621 17:30:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:21.621 17:30:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:21.621 17:30:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:21.621 17:30:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:21.621 17:30:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:21.621 17:30:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:21.621 17:30:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:21.621 17:30:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:21.621 17:30:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:21.621 17:30:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:21.621 17:30:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:21.621 17:30:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:21.621 17:30:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:21.621 17:30:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:21.879 17:30:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:21.879 17:30:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:21.879 17:30:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:21.879 17:30:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:21.879 17:30:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:21.879 17:30:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:21.879 17:30:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:21.879 17:30:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:21.879 17:30:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:21.879 17:30:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:14:21.879 17:30:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:21.879 17:30:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:21.879 17:30:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:14:21.879 17:30:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:21.879 17:30:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:21.879 17:30:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:21.879 17:30:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:21.879 17:30:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:21.879 17:30:55 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:21.879 17:30:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:14:22.138 /dev/nbd1 00:14:22.138 17:30:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:22.138 17:30:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:22.138 17:30:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:22.138 17:30:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:22.138 17:30:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:22.138 17:30:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:22.138 17:30:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:22.138 17:30:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:22.138 17:30:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:22.138 17:30:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:22.138 17:30:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:22.138 1+0 records in 00:14:22.138 1+0 records out 00:14:22.138 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275943 s, 14.8 MB/s 00:14:22.138 17:30:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:22.138 17:30:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:22.138 17:30:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:22.138 17:30:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:22.138 17:30:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:22.138 17:30:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:22.138 17:30:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:22.138 17:30:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:22.138 17:30:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:22.138 17:30:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:22.138 17:30:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:22.138 17:30:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:22.138 17:30:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:22.138 17:30:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:22.138 17:30:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:22.397 17:30:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:22.397 17:30:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:22.397 17:30:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:22.397 17:30:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:22.397 17:30:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:22.397 17:30:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 
00:14:22.397 17:30:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:22.397 17:30:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:22.397 17:30:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:22.397 17:30:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:22.397 17:30:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:22.397 17:30:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:22.397 17:30:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:22.397 17:30:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:22.397 17:30:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:22.656 17:30:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:22.656 17:30:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:22.656 17:30:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:22.656 17:30:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:22.656 17:30:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:22.656 17:30:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:22.656 17:30:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:22.656 17:30:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:22.656 17:30:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:22.657 17:30:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # 
killprocess 78751 00:14:22.657 17:30:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 78751 ']' 00:14:22.657 17:30:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 78751 00:14:22.657 17:30:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:14:22.657 17:30:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:22.657 17:30:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78751 00:14:22.657 17:30:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:22.657 17:30:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:22.657 17:30:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78751' 00:14:22.657 killing process with pid 78751 00:14:22.657 17:30:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 78751 00:14:22.657 Received shutdown signal, test time was about 9.890995 seconds 00:14:22.657 00:14:22.657 Latency(us) 00:14:22.657 [2024-12-07T17:30:56.039Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:22.657 [2024-12-07T17:30:56.039Z] =================================================================================================================== 00:14:22.657 [2024-12-07T17:30:56.039Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:22.657 [2024-12-07 17:30:55.994697] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:22.657 17:30:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 78751 00:14:23.249 [2024-12-07 17:30:56.395051] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:24.198 17:30:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:24.198 00:14:24.198 real 0m13.209s 00:14:24.198 user 
0m16.515s 00:14:24.198 sys 0m1.899s 00:14:24.198 17:30:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:24.198 ************************************ 00:14:24.198 END TEST raid_rebuild_test_io 00:14:24.198 ************************************ 00:14:24.198 17:30:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.198 17:30:57 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:14:24.198 17:30:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:24.198 17:30:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:24.198 17:30:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:24.458 ************************************ 00:14:24.458 START TEST raid_rebuild_test_sb_io 00:14:24.458 ************************************ 00:14:24.458 17:30:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:14:24.458 17:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:24.458 17:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:24.458 17:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:24.458 17:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:24.458 17:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:24.458 17:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:24.458 17:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:24.458 17:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:24.458 17:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 
00:14:24.458 17:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:24.458 17:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:24.458 17:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:24.458 17:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:24.458 17:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:24.458 17:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:24.458 17:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:24.458 17:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:24.458 17:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:24.458 17:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:24.458 17:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:24.458 17:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:24.458 17:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:24.458 17:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:24.458 17:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:24.458 17:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:24.458 17:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:24.458 17:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:24.458 17:30:57 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:24.458 17:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:24.458 17:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:24.458 17:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79156 00:14:24.458 17:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:24.458 17:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79156 00:14:24.458 17:30:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 79156 ']' 00:14:24.458 17:30:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:24.458 17:30:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:24.458 17:30:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:24.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:24.458 17:30:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:24.458 17:30:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.458 [2024-12-07 17:30:57.685133] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:14:24.458 [2024-12-07 17:30:57.685326] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:14:24.458 Zero copy mechanism will not be used. 
00:14:24.458 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79156 ] 00:14:24.719 [2024-12-07 17:30:57.858569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.720 [2024-12-07 17:30:57.961906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:24.979 [2024-12-07 17:30:58.147804] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:24.979 [2024-12-07 17:30:58.147960] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:25.239 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:25.239 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:14:25.239 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:25.239 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:25.239 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.239 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.239 BaseBdev1_malloc 00:14:25.239 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.239 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:25.239 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.239 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.239 [2024-12-07 17:30:58.551056] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:25.239 [2024-12-07 17:30:58.551155] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:25.239 [2024-12-07 17:30:58.551183] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:25.239 [2024-12-07 17:30:58.551194] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:25.239 [2024-12-07 17:30:58.553247] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:25.239 [2024-12-07 17:30:58.553291] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:25.239 BaseBdev1 00:14:25.239 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.239 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:25.239 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:25.239 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.239 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.239 BaseBdev2_malloc 00:14:25.239 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.239 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:25.239 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.239 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.239 [2024-12-07 17:30:58.604561] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:25.239 [2024-12-07 17:30:58.604623] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:25.239 [2024-12-07 17:30:58.604645] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device 
created at: 0x0x616000007e80 00:14:25.239 [2024-12-07 17:30:58.604656] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:25.239 [2024-12-07 17:30:58.606635] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:25.239 [2024-12-07 17:30:58.606722] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:25.239 BaseBdev2 00:14:25.239 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.239 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:25.239 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:25.239 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.239 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.501 BaseBdev3_malloc 00:14:25.501 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.501 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:25.501 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.501 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.501 [2024-12-07 17:30:58.669581] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:25.501 [2024-12-07 17:30:58.669636] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:25.501 [2024-12-07 17:30:58.669659] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:25.501 [2024-12-07 17:30:58.669670] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:25.501 [2024-12-07 
17:30:58.671716] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:25.501 [2024-12-07 17:30:58.671755] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:25.501 BaseBdev3 00:14:25.501 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.501 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:25.501 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:25.501 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.501 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.501 BaseBdev4_malloc 00:14:25.501 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.501 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:25.501 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.501 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.501 [2024-12-07 17:30:58.722463] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:25.501 [2024-12-07 17:30:58.722520] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:25.501 [2024-12-07 17:30:58.722541] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:25.501 [2024-12-07 17:30:58.722551] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:25.501 [2024-12-07 17:30:58.724551] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:25.501 [2024-12-07 17:30:58.724593] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:25.501 BaseBdev4 00:14:25.501 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.501 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:25.501 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.501 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.501 spare_malloc 00:14:25.501 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.501 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:25.501 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.501 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.501 spare_delay 00:14:25.501 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.501 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:25.501 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.501 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.501 [2024-12-07 17:30:58.799912] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:25.501 [2024-12-07 17:30:58.800000] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:25.501 [2024-12-07 17:30:58.800038] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:25.501 [2024-12-07 17:30:58.800054] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:25.501 [2024-12-07 17:30:58.802329] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:25.501 [2024-12-07 17:30:58.802372] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:25.501 spare 00:14:25.501 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.501 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:25.501 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.501 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.501 [2024-12-07 17:30:58.807955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:25.501 [2024-12-07 17:30:58.809911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:25.501 [2024-12-07 17:30:58.809996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:25.501 [2024-12-07 17:30:58.810055] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:25.501 [2024-12-07 17:30:58.810266] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:25.501 [2024-12-07 17:30:58.810291] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:25.501 [2024-12-07 17:30:58.810552] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:25.501 [2024-12-07 17:30:58.810763] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:25.501 [2024-12-07 17:30:58.810775] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:25.501 
[2024-12-07 17:30:58.810944] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:25.501 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.501 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:25.501 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:25.501 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:25.501 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:25.501 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:25.501 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:25.501 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.501 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.501 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.501 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.501 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.501 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.501 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.501 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.501 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.501 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:14:25.501 "name": "raid_bdev1", 00:14:25.501 "uuid": "b54b6629-35c9-48e1-903a-6151b5737927", 00:14:25.501 "strip_size_kb": 0, 00:14:25.501 "state": "online", 00:14:25.501 "raid_level": "raid1", 00:14:25.501 "superblock": true, 00:14:25.501 "num_base_bdevs": 4, 00:14:25.501 "num_base_bdevs_discovered": 4, 00:14:25.501 "num_base_bdevs_operational": 4, 00:14:25.501 "base_bdevs_list": [ 00:14:25.501 { 00:14:25.501 "name": "BaseBdev1", 00:14:25.501 "uuid": "adb85a0a-7550-5a1e-adb1-51fe51a7af14", 00:14:25.501 "is_configured": true, 00:14:25.501 "data_offset": 2048, 00:14:25.501 "data_size": 63488 00:14:25.501 }, 00:14:25.501 { 00:14:25.501 "name": "BaseBdev2", 00:14:25.501 "uuid": "f09ff823-c80d-5023-83af-8d877f751bd0", 00:14:25.501 "is_configured": true, 00:14:25.501 "data_offset": 2048, 00:14:25.501 "data_size": 63488 00:14:25.501 }, 00:14:25.501 { 00:14:25.501 "name": "BaseBdev3", 00:14:25.501 "uuid": "354ac570-9b14-5eae-9c59-2b5bc3f2c068", 00:14:25.501 "is_configured": true, 00:14:25.501 "data_offset": 2048, 00:14:25.501 "data_size": 63488 00:14:25.501 }, 00:14:25.501 { 00:14:25.501 "name": "BaseBdev4", 00:14:25.501 "uuid": "4bd478bc-dcfc-554a-b531-f9150c0457e4", 00:14:25.501 "is_configured": true, 00:14:25.501 "data_offset": 2048, 00:14:25.501 "data_size": 63488 00:14:25.501 } 00:14:25.501 ] 00:14:25.501 }' 00:14:25.501 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.501 17:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.071 17:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:26.071 17:30:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.071 17:30:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.071 17:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 
00:14:26.071 [2024-12-07 17:30:59.279520] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:26.071 17:30:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.071 17:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:26.071 17:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:26.071 17:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.071 17:30:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.071 17:30:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.071 17:30:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.071 17:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:26.071 17:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:26.071 17:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:26.071 17:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:26.071 17:30:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.071 17:30:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.071 [2024-12-07 17:30:59.355129] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:26.071 17:30:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.071 17:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:26.071 17:30:59 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:26.071 17:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:26.071 17:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:26.071 17:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:26.071 17:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:26.071 17:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.071 17:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.071 17:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.071 17:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.071 17:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.071 17:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.071 17:30:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.071 17:30:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.071 17:30:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.071 17:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.071 "name": "raid_bdev1", 00:14:26.071 "uuid": "b54b6629-35c9-48e1-903a-6151b5737927", 00:14:26.071 "strip_size_kb": 0, 00:14:26.071 "state": "online", 00:14:26.071 "raid_level": "raid1", 00:14:26.071 "superblock": true, 00:14:26.071 "num_base_bdevs": 4, 00:14:26.071 "num_base_bdevs_discovered": 3, 00:14:26.071 "num_base_bdevs_operational": 3, 
00:14:26.071 "base_bdevs_list": [ 00:14:26.071 { 00:14:26.071 "name": null, 00:14:26.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.071 "is_configured": false, 00:14:26.071 "data_offset": 0, 00:14:26.071 "data_size": 63488 00:14:26.071 }, 00:14:26.071 { 00:14:26.071 "name": "BaseBdev2", 00:14:26.071 "uuid": "f09ff823-c80d-5023-83af-8d877f751bd0", 00:14:26.071 "is_configured": true, 00:14:26.071 "data_offset": 2048, 00:14:26.071 "data_size": 63488 00:14:26.071 }, 00:14:26.071 { 00:14:26.071 "name": "BaseBdev3", 00:14:26.071 "uuid": "354ac570-9b14-5eae-9c59-2b5bc3f2c068", 00:14:26.071 "is_configured": true, 00:14:26.071 "data_offset": 2048, 00:14:26.071 "data_size": 63488 00:14:26.071 }, 00:14:26.071 { 00:14:26.071 "name": "BaseBdev4", 00:14:26.071 "uuid": "4bd478bc-dcfc-554a-b531-f9150c0457e4", 00:14:26.071 "is_configured": true, 00:14:26.071 "data_offset": 2048, 00:14:26.071 "data_size": 63488 00:14:26.071 } 00:14:26.071 ] 00:14:26.071 }' 00:14:26.071 17:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.071 17:30:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.071 [2024-12-07 17:30:59.439740] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:26.071 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:26.071 Zero copy mechanism will not be used. 00:14:26.071 Running I/O for 60 seconds... 
00:14:26.642 17:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:26.642 17:30:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.642 17:30:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.643 [2024-12-07 17:30:59.808375] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:26.643 17:30:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.643 17:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:26.643 [2024-12-07 17:30:59.885387] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:26.643 [2024-12-07 17:30:59.887629] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:26.903 [2024-12-07 17:31:00.183186] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:26.903 [2024-12-07 17:31:00.183440] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:27.163 171.00 IOPS, 513.00 MiB/s [2024-12-07T17:31:00.545Z] [2024-12-07 17:31:00.507991] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:27.423 [2024-12-07 17:31:00.624934] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:27.683 17:31:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:27.683 17:31:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:27.683 17:31:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:14:27.683 17:31:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:27.683 17:31:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:27.683 17:31:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.683 17:31:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.683 17:31:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.683 17:31:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.683 17:31:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.683 17:31:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:27.683 "name": "raid_bdev1", 00:14:27.683 "uuid": "b54b6629-35c9-48e1-903a-6151b5737927", 00:14:27.683 "strip_size_kb": 0, 00:14:27.683 "state": "online", 00:14:27.683 "raid_level": "raid1", 00:14:27.683 "superblock": true, 00:14:27.683 "num_base_bdevs": 4, 00:14:27.683 "num_base_bdevs_discovered": 4, 00:14:27.683 "num_base_bdevs_operational": 4, 00:14:27.683 "process": { 00:14:27.683 "type": "rebuild", 00:14:27.683 "target": "spare", 00:14:27.683 "progress": { 00:14:27.683 "blocks": 14336, 00:14:27.683 "percent": 22 00:14:27.683 } 00:14:27.683 }, 00:14:27.683 "base_bdevs_list": [ 00:14:27.683 { 00:14:27.683 "name": "spare", 00:14:27.683 "uuid": "f172ecfe-59b6-52db-855f-8d42c449433d", 00:14:27.683 "is_configured": true, 00:14:27.683 "data_offset": 2048, 00:14:27.683 "data_size": 63488 00:14:27.683 }, 00:14:27.683 { 00:14:27.683 "name": "BaseBdev2", 00:14:27.683 "uuid": "f09ff823-c80d-5023-83af-8d877f751bd0", 00:14:27.683 "is_configured": true, 00:14:27.683 "data_offset": 2048, 00:14:27.683 "data_size": 63488 00:14:27.683 }, 00:14:27.683 { 00:14:27.683 "name": "BaseBdev3", 00:14:27.683 "uuid": 
"354ac570-9b14-5eae-9c59-2b5bc3f2c068", 00:14:27.683 "is_configured": true, 00:14:27.683 "data_offset": 2048, 00:14:27.683 "data_size": 63488 00:14:27.683 }, 00:14:27.683 { 00:14:27.683 "name": "BaseBdev4", 00:14:27.683 "uuid": "4bd478bc-dcfc-554a-b531-f9150c0457e4", 00:14:27.683 "is_configured": true, 00:14:27.683 "data_offset": 2048, 00:14:27.683 "data_size": 63488 00:14:27.683 } 00:14:27.683 ] 00:14:27.683 }' 00:14:27.683 17:31:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:27.683 17:31:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:27.683 17:31:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:27.683 17:31:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:27.683 17:31:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:27.683 17:31:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.683 17:31:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.683 [2024-12-07 17:31:00.995097] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:27.683 [2024-12-07 17:31:00.997300] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:27.943 [2024-12-07 17:31:01.104899] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:27.943 [2024-12-07 17:31:01.122694] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:27.943 [2024-12-07 17:31:01.122836] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:27.943 [2024-12-07 17:31:01.122858] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: 
No such device 00:14:27.943 [2024-12-07 17:31:01.149599] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:14:27.943 17:31:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.943 17:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:27.943 17:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:27.943 17:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:27.943 17:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:27.943 17:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:27.943 17:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:27.943 17:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.943 17:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.943 17:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.943 17:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.943 17:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.943 17:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.943 17:31:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.943 17:31:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.943 17:31:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.943 17:31:01 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.943 "name": "raid_bdev1", 00:14:27.943 "uuid": "b54b6629-35c9-48e1-903a-6151b5737927", 00:14:27.943 "strip_size_kb": 0, 00:14:27.943 "state": "online", 00:14:27.943 "raid_level": "raid1", 00:14:27.943 "superblock": true, 00:14:27.943 "num_base_bdevs": 4, 00:14:27.943 "num_base_bdevs_discovered": 3, 00:14:27.943 "num_base_bdevs_operational": 3, 00:14:27.943 "base_bdevs_list": [ 00:14:27.943 { 00:14:27.943 "name": null, 00:14:27.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.943 "is_configured": false, 00:14:27.943 "data_offset": 0, 00:14:27.943 "data_size": 63488 00:14:27.943 }, 00:14:27.943 { 00:14:27.943 "name": "BaseBdev2", 00:14:27.943 "uuid": "f09ff823-c80d-5023-83af-8d877f751bd0", 00:14:27.943 "is_configured": true, 00:14:27.943 "data_offset": 2048, 00:14:27.943 "data_size": 63488 00:14:27.943 }, 00:14:27.943 { 00:14:27.943 "name": "BaseBdev3", 00:14:27.943 "uuid": "354ac570-9b14-5eae-9c59-2b5bc3f2c068", 00:14:27.943 "is_configured": true, 00:14:27.943 "data_offset": 2048, 00:14:27.943 "data_size": 63488 00:14:27.943 }, 00:14:27.943 { 00:14:27.943 "name": "BaseBdev4", 00:14:27.943 "uuid": "4bd478bc-dcfc-554a-b531-f9150c0457e4", 00:14:27.943 "is_configured": true, 00:14:27.943 "data_offset": 2048, 00:14:27.943 "data_size": 63488 00:14:27.943 } 00:14:27.943 ] 00:14:27.943 }' 00:14:27.943 17:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.943 17:31:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:28.203 158.00 IOPS, 474.00 MiB/s [2024-12-07T17:31:01.585Z] 17:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:28.203 17:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:28.203 17:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:14:28.203 17:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:28.203 17:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:28.463 17:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.463 17:31:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.463 17:31:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:28.463 17:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.463 17:31:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.463 17:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:28.463 "name": "raid_bdev1", 00:14:28.463 "uuid": "b54b6629-35c9-48e1-903a-6151b5737927", 00:14:28.463 "strip_size_kb": 0, 00:14:28.463 "state": "online", 00:14:28.463 "raid_level": "raid1", 00:14:28.463 "superblock": true, 00:14:28.463 "num_base_bdevs": 4, 00:14:28.463 "num_base_bdevs_discovered": 3, 00:14:28.463 "num_base_bdevs_operational": 3, 00:14:28.463 "base_bdevs_list": [ 00:14:28.463 { 00:14:28.463 "name": null, 00:14:28.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.463 "is_configured": false, 00:14:28.463 "data_offset": 0, 00:14:28.463 "data_size": 63488 00:14:28.463 }, 00:14:28.463 { 00:14:28.463 "name": "BaseBdev2", 00:14:28.463 "uuid": "f09ff823-c80d-5023-83af-8d877f751bd0", 00:14:28.463 "is_configured": true, 00:14:28.463 "data_offset": 2048, 00:14:28.463 "data_size": 63488 00:14:28.463 }, 00:14:28.463 { 00:14:28.463 "name": "BaseBdev3", 00:14:28.463 "uuid": "354ac570-9b14-5eae-9c59-2b5bc3f2c068", 00:14:28.463 "is_configured": true, 00:14:28.463 "data_offset": 2048, 00:14:28.463 "data_size": 63488 00:14:28.463 }, 00:14:28.463 { 00:14:28.463 "name": 
"BaseBdev4", 00:14:28.463 "uuid": "4bd478bc-dcfc-554a-b531-f9150c0457e4", 00:14:28.463 "is_configured": true, 00:14:28.463 "data_offset": 2048, 00:14:28.463 "data_size": 63488 00:14:28.463 } 00:14:28.463 ] 00:14:28.463 }' 00:14:28.463 17:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:28.463 17:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:28.463 17:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:28.463 17:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:28.463 17:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:28.463 17:31:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.463 17:31:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:28.463 [2024-12-07 17:31:01.725081] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:28.463 17:31:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.463 17:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:28.463 [2024-12-07 17:31:01.806506] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:28.463 [2024-12-07 17:31:01.808567] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:28.724 [2024-12-07 17:31:02.056071] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:28.724 [2024-12-07 17:31:02.056797] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:29.293 166.00 IOPS, 498.00 MiB/s [2024-12-07T17:31:02.675Z] 
[2024-12-07 17:31:02.522230] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:29.552 17:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:29.552 17:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:29.552 17:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:29.552 17:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:29.552 17:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:29.552 17:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.552 17:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.552 17:31:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.552 17:31:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.552 17:31:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.552 17:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:29.553 "name": "raid_bdev1", 00:14:29.553 "uuid": "b54b6629-35c9-48e1-903a-6151b5737927", 00:14:29.553 "strip_size_kb": 0, 00:14:29.553 "state": "online", 00:14:29.553 "raid_level": "raid1", 00:14:29.553 "superblock": true, 00:14:29.553 "num_base_bdevs": 4, 00:14:29.553 "num_base_bdevs_discovered": 4, 00:14:29.553 "num_base_bdevs_operational": 4, 00:14:29.553 "process": { 00:14:29.553 "type": "rebuild", 00:14:29.553 "target": "spare", 00:14:29.553 "progress": { 00:14:29.553 "blocks": 12288, 00:14:29.553 "percent": 19 00:14:29.553 } 00:14:29.553 }, 00:14:29.553 "base_bdevs_list": [ 
00:14:29.553 { 00:14:29.553 "name": "spare", 00:14:29.553 "uuid": "f172ecfe-59b6-52db-855f-8d42c449433d", 00:14:29.553 "is_configured": true, 00:14:29.553 "data_offset": 2048, 00:14:29.553 "data_size": 63488 00:14:29.553 }, 00:14:29.553 { 00:14:29.553 "name": "BaseBdev2", 00:14:29.553 "uuid": "f09ff823-c80d-5023-83af-8d877f751bd0", 00:14:29.553 "is_configured": true, 00:14:29.553 "data_offset": 2048, 00:14:29.553 "data_size": 63488 00:14:29.553 }, 00:14:29.553 { 00:14:29.553 "name": "BaseBdev3", 00:14:29.553 "uuid": "354ac570-9b14-5eae-9c59-2b5bc3f2c068", 00:14:29.553 "is_configured": true, 00:14:29.553 "data_offset": 2048, 00:14:29.553 "data_size": 63488 00:14:29.553 }, 00:14:29.553 { 00:14:29.553 "name": "BaseBdev4", 00:14:29.553 "uuid": "4bd478bc-dcfc-554a-b531-f9150c0457e4", 00:14:29.553 "is_configured": true, 00:14:29.553 "data_offset": 2048, 00:14:29.553 "data_size": 63488 00:14:29.553 } 00:14:29.553 ] 00:14:29.553 }' 00:14:29.553 17:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:29.553 17:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:29.553 17:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:29.553 [2024-12-07 17:31:02.896929] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:29.553 17:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:29.553 17:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:29.553 17:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:29.553 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:29.553 17:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local 
num_base_bdevs_operational=4 00:14:29.553 17:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:29.553 17:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:29.553 17:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:29.553 17:31:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.553 17:31:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.553 [2024-12-07 17:31:02.917026] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:29.812 [2024-12-07 17:31:03.114318] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:30.072 [2024-12-07 17:31:03.318537] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:14:30.072 [2024-12-07 17:31:03.318569] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:14:30.072 17:31:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.072 17:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:30.072 17:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:30.072 17:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:30.072 17:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:30.072 17:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:30.072 17:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:30.072 17:31:03 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:30.072 17:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.072 17:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.072 17:31:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.072 17:31:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.072 17:31:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.072 17:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:30.072 "name": "raid_bdev1", 00:14:30.072 "uuid": "b54b6629-35c9-48e1-903a-6151b5737927", 00:14:30.072 "strip_size_kb": 0, 00:14:30.072 "state": "online", 00:14:30.072 "raid_level": "raid1", 00:14:30.072 "superblock": true, 00:14:30.072 "num_base_bdevs": 4, 00:14:30.072 "num_base_bdevs_discovered": 3, 00:14:30.072 "num_base_bdevs_operational": 3, 00:14:30.072 "process": { 00:14:30.072 "type": "rebuild", 00:14:30.072 "target": "spare", 00:14:30.072 "progress": { 00:14:30.072 "blocks": 16384, 00:14:30.072 "percent": 25 00:14:30.072 } 00:14:30.072 }, 00:14:30.072 "base_bdevs_list": [ 00:14:30.072 { 00:14:30.072 "name": "spare", 00:14:30.072 "uuid": "f172ecfe-59b6-52db-855f-8d42c449433d", 00:14:30.072 "is_configured": true, 00:14:30.072 "data_offset": 2048, 00:14:30.072 "data_size": 63488 00:14:30.072 }, 00:14:30.072 { 00:14:30.072 "name": null, 00:14:30.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.072 "is_configured": false, 00:14:30.072 "data_offset": 0, 00:14:30.072 "data_size": 63488 00:14:30.072 }, 00:14:30.072 { 00:14:30.072 "name": "BaseBdev3", 00:14:30.072 "uuid": "354ac570-9b14-5eae-9c59-2b5bc3f2c068", 00:14:30.072 "is_configured": true, 00:14:30.072 "data_offset": 2048, 00:14:30.072 "data_size": 63488 00:14:30.072 }, 00:14:30.072 { 
00:14:30.072 "name": "BaseBdev4", 00:14:30.072 "uuid": "4bd478bc-dcfc-554a-b531-f9150c0457e4", 00:14:30.072 "is_configured": true, 00:14:30.072 "data_offset": 2048, 00:14:30.072 "data_size": 63488 00:14:30.072 } 00:14:30.072 ] 00:14:30.072 }' 00:14:30.072 17:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:30.072 17:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:30.072 17:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:30.331 133.25 IOPS, 399.75 MiB/s [2024-12-07T17:31:03.713Z] 17:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:30.331 17:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=494 00:14:30.331 17:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:30.331 17:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:30.331 17:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:30.331 17:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:30.331 17:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:30.331 17:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:30.331 17:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.331 17:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.331 17:31:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.331 17:31:03 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:14:30.331 17:31:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.331 17:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:30.331 "name": "raid_bdev1", 00:14:30.331 "uuid": "b54b6629-35c9-48e1-903a-6151b5737927", 00:14:30.331 "strip_size_kb": 0, 00:14:30.331 "state": "online", 00:14:30.331 "raid_level": "raid1", 00:14:30.331 "superblock": true, 00:14:30.331 "num_base_bdevs": 4, 00:14:30.331 "num_base_bdevs_discovered": 3, 00:14:30.331 "num_base_bdevs_operational": 3, 00:14:30.331 "process": { 00:14:30.331 "type": "rebuild", 00:14:30.331 "target": "spare", 00:14:30.331 "progress": { 00:14:30.331 "blocks": 18432, 00:14:30.331 "percent": 29 00:14:30.331 } 00:14:30.331 }, 00:14:30.331 "base_bdevs_list": [ 00:14:30.331 { 00:14:30.331 "name": "spare", 00:14:30.331 "uuid": "f172ecfe-59b6-52db-855f-8d42c449433d", 00:14:30.331 "is_configured": true, 00:14:30.331 "data_offset": 2048, 00:14:30.331 "data_size": 63488 00:14:30.331 }, 00:14:30.331 { 00:14:30.331 "name": null, 00:14:30.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.331 "is_configured": false, 00:14:30.331 "data_offset": 0, 00:14:30.331 "data_size": 63488 00:14:30.331 }, 00:14:30.331 { 00:14:30.331 "name": "BaseBdev3", 00:14:30.331 "uuid": "354ac570-9b14-5eae-9c59-2b5bc3f2c068", 00:14:30.331 "is_configured": true, 00:14:30.331 "data_offset": 2048, 00:14:30.331 "data_size": 63488 00:14:30.331 }, 00:14:30.331 { 00:14:30.331 "name": "BaseBdev4", 00:14:30.331 "uuid": "4bd478bc-dcfc-554a-b531-f9150c0457e4", 00:14:30.331 "is_configured": true, 00:14:30.331 "data_offset": 2048, 00:14:30.331 "data_size": 63488 00:14:30.331 } 00:14:30.331 ] 00:14:30.331 }' 00:14:30.331 17:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:30.331 17:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:14:30.331 17:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:30.331 17:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:30.331 17:31:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:30.331 [2024-12-07 17:31:03.673205] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:30.901 [2024-12-07 17:31:04.123037] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:30.901 [2024-12-07 17:31:04.123650] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:31.161 117.60 IOPS, 352.80 MiB/s [2024-12-07T17:31:04.543Z] [2024-12-07 17:31:04.460808] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:14:31.161 [2024-12-07 17:31:04.461216] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:14:31.420 [2024-12-07 17:31:04.583906] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:31.420 17:31:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:31.420 17:31:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:31.420 17:31:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:31.420 17:31:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:31.420 17:31:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:31.420 17:31:04 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:31.420 17:31:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.420 17:31:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.420 17:31:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.420 17:31:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:31.420 17:31:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.420 17:31:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:31.420 "name": "raid_bdev1", 00:14:31.420 "uuid": "b54b6629-35c9-48e1-903a-6151b5737927", 00:14:31.420 "strip_size_kb": 0, 00:14:31.420 "state": "online", 00:14:31.420 "raid_level": "raid1", 00:14:31.420 "superblock": true, 00:14:31.420 "num_base_bdevs": 4, 00:14:31.420 "num_base_bdevs_discovered": 3, 00:14:31.420 "num_base_bdevs_operational": 3, 00:14:31.420 "process": { 00:14:31.420 "type": "rebuild", 00:14:31.420 "target": "spare", 00:14:31.420 "progress": { 00:14:31.420 "blocks": 34816, 00:14:31.420 "percent": 54 00:14:31.420 } 00:14:31.420 }, 00:14:31.420 "base_bdevs_list": [ 00:14:31.420 { 00:14:31.420 "name": "spare", 00:14:31.420 "uuid": "f172ecfe-59b6-52db-855f-8d42c449433d", 00:14:31.420 "is_configured": true, 00:14:31.420 "data_offset": 2048, 00:14:31.420 "data_size": 63488 00:14:31.420 }, 00:14:31.420 { 00:14:31.420 "name": null, 00:14:31.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.420 "is_configured": false, 00:14:31.420 "data_offset": 0, 00:14:31.420 "data_size": 63488 00:14:31.420 }, 00:14:31.420 { 00:14:31.420 "name": "BaseBdev3", 00:14:31.420 "uuid": "354ac570-9b14-5eae-9c59-2b5bc3f2c068", 00:14:31.420 "is_configured": true, 00:14:31.420 "data_offset": 2048, 00:14:31.420 "data_size": 
63488 00:14:31.420 }, 00:14:31.420 { 00:14:31.420 "name": "BaseBdev4", 00:14:31.420 "uuid": "4bd478bc-dcfc-554a-b531-f9150c0457e4", 00:14:31.420 "is_configured": true, 00:14:31.420 "data_offset": 2048, 00:14:31.420 "data_size": 63488 00:14:31.420 } 00:14:31.420 ] 00:14:31.420 }' 00:14:31.420 17:31:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:31.420 17:31:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:31.420 17:31:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:31.420 17:31:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:31.420 17:31:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:31.680 [2024-12-07 17:31:04.850676] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:14:31.680 [2024-12-07 17:31:05.057664] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:14:32.246 [2024-12-07 17:31:05.401444] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:14:32.246 104.17 IOPS, 312.50 MiB/s [2024-12-07T17:31:05.628Z] [2024-12-07 17:31:05.614409] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:14:32.506 17:31:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:32.506 17:31:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:32.506 17:31:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:32.506 17:31:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- 
# local process_type=rebuild 00:14:32.506 17:31:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:32.506 17:31:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:32.506 17:31:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.506 17:31:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.506 17:31:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.506 17:31:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.506 17:31:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.506 17:31:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:32.506 "name": "raid_bdev1", 00:14:32.506 "uuid": "b54b6629-35c9-48e1-903a-6151b5737927", 00:14:32.506 "strip_size_kb": 0, 00:14:32.506 "state": "online", 00:14:32.506 "raid_level": "raid1", 00:14:32.506 "superblock": true, 00:14:32.506 "num_base_bdevs": 4, 00:14:32.506 "num_base_bdevs_discovered": 3, 00:14:32.506 "num_base_bdevs_operational": 3, 00:14:32.506 "process": { 00:14:32.506 "type": "rebuild", 00:14:32.506 "target": "spare", 00:14:32.506 "progress": { 00:14:32.506 "blocks": 47104, 00:14:32.506 "percent": 74 00:14:32.506 } 00:14:32.506 }, 00:14:32.506 "base_bdevs_list": [ 00:14:32.506 { 00:14:32.506 "name": "spare", 00:14:32.506 "uuid": "f172ecfe-59b6-52db-855f-8d42c449433d", 00:14:32.506 "is_configured": true, 00:14:32.506 "data_offset": 2048, 00:14:32.506 "data_size": 63488 00:14:32.506 }, 00:14:32.506 { 00:14:32.506 "name": null, 00:14:32.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.506 "is_configured": false, 00:14:32.506 "data_offset": 0, 00:14:32.506 "data_size": 63488 00:14:32.506 }, 00:14:32.506 { 00:14:32.506 "name": "BaseBdev3", 
00:14:32.506 "uuid": "354ac570-9b14-5eae-9c59-2b5bc3f2c068", 00:14:32.506 "is_configured": true, 00:14:32.506 "data_offset": 2048, 00:14:32.506 "data_size": 63488 00:14:32.506 }, 00:14:32.506 { 00:14:32.506 "name": "BaseBdev4", 00:14:32.506 "uuid": "4bd478bc-dcfc-554a-b531-f9150c0457e4", 00:14:32.506 "is_configured": true, 00:14:32.506 "data_offset": 2048, 00:14:32.506 "data_size": 63488 00:14:32.506 } 00:14:32.506 ] 00:14:32.506 }' 00:14:32.506 17:31:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:32.506 17:31:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:32.506 17:31:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:32.765 17:31:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:32.765 17:31:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:32.765 [2024-12-07 17:31:05.942970] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:14:33.592 93.86 IOPS, 281.57 MiB/s [2024-12-07T17:31:06.974Z] [2024-12-07 17:31:06.715176] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:33.592 [2024-12-07 17:31:06.814953] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:33.592 [2024-12-07 17:31:06.826036] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:33.592 17:31:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:33.592 17:31:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:33.592 17:31:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:33.592 17:31:06 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:33.592 17:31:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:33.592 17:31:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:33.592 17:31:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.592 17:31:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.592 17:31:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.592 17:31:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.592 17:31:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.592 17:31:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:33.592 "name": "raid_bdev1", 00:14:33.592 "uuid": "b54b6629-35c9-48e1-903a-6151b5737927", 00:14:33.592 "strip_size_kb": 0, 00:14:33.592 "state": "online", 00:14:33.592 "raid_level": "raid1", 00:14:33.592 "superblock": true, 00:14:33.592 "num_base_bdevs": 4, 00:14:33.592 "num_base_bdevs_discovered": 3, 00:14:33.592 "num_base_bdevs_operational": 3, 00:14:33.592 "base_bdevs_list": [ 00:14:33.592 { 00:14:33.592 "name": "spare", 00:14:33.592 "uuid": "f172ecfe-59b6-52db-855f-8d42c449433d", 00:14:33.592 "is_configured": true, 00:14:33.592 "data_offset": 2048, 00:14:33.592 "data_size": 63488 00:14:33.592 }, 00:14:33.592 { 00:14:33.592 "name": null, 00:14:33.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.592 "is_configured": false, 00:14:33.592 "data_offset": 0, 00:14:33.592 "data_size": 63488 00:14:33.592 }, 00:14:33.592 { 00:14:33.592 "name": "BaseBdev3", 00:14:33.592 "uuid": "354ac570-9b14-5eae-9c59-2b5bc3f2c068", 00:14:33.592 "is_configured": true, 00:14:33.592 "data_offset": 2048, 00:14:33.592 
"data_size": 63488 00:14:33.592 }, 00:14:33.592 { 00:14:33.592 "name": "BaseBdev4", 00:14:33.592 "uuid": "4bd478bc-dcfc-554a-b531-f9150c0457e4", 00:14:33.592 "is_configured": true, 00:14:33.592 "data_offset": 2048, 00:14:33.592 "data_size": 63488 00:14:33.592 } 00:14:33.592 ] 00:14:33.592 }' 00:14:33.851 17:31:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:33.851 17:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:33.851 17:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:33.851 17:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:33.851 17:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:14:33.851 17:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:33.851 17:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:33.851 17:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:33.851 17:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:33.851 17:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:33.851 17:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.851 17:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.851 17:31:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.851 17:31:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.851 17:31:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.851 
17:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:33.851 "name": "raid_bdev1", 00:14:33.851 "uuid": "b54b6629-35c9-48e1-903a-6151b5737927", 00:14:33.851 "strip_size_kb": 0, 00:14:33.851 "state": "online", 00:14:33.851 "raid_level": "raid1", 00:14:33.851 "superblock": true, 00:14:33.851 "num_base_bdevs": 4, 00:14:33.851 "num_base_bdevs_discovered": 3, 00:14:33.851 "num_base_bdevs_operational": 3, 00:14:33.851 "base_bdevs_list": [ 00:14:33.851 { 00:14:33.851 "name": "spare", 00:14:33.851 "uuid": "f172ecfe-59b6-52db-855f-8d42c449433d", 00:14:33.851 "is_configured": true, 00:14:33.851 "data_offset": 2048, 00:14:33.851 "data_size": 63488 00:14:33.851 }, 00:14:33.851 { 00:14:33.851 "name": null, 00:14:33.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.851 "is_configured": false, 00:14:33.851 "data_offset": 0, 00:14:33.851 "data_size": 63488 00:14:33.851 }, 00:14:33.851 { 00:14:33.851 "name": "BaseBdev3", 00:14:33.851 "uuid": "354ac570-9b14-5eae-9c59-2b5bc3f2c068", 00:14:33.851 "is_configured": true, 00:14:33.851 "data_offset": 2048, 00:14:33.851 "data_size": 63488 00:14:33.851 }, 00:14:33.851 { 00:14:33.851 "name": "BaseBdev4", 00:14:33.851 "uuid": "4bd478bc-dcfc-554a-b531-f9150c0457e4", 00:14:33.851 "is_configured": true, 00:14:33.851 "data_offset": 2048, 00:14:33.851 "data_size": 63488 00:14:33.851 } 00:14:33.851 ] 00:14:33.851 }' 00:14:33.852 17:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:33.852 17:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:33.852 17:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:33.852 17:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:33.852 17:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 
3 00:14:33.852 17:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:33.852 17:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:33.852 17:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:33.852 17:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:33.852 17:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:33.852 17:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.852 17:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.852 17:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.852 17:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.852 17:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.852 17:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.852 17:31:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.852 17:31:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.110 17:31:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.110 17:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.110 "name": "raid_bdev1", 00:14:34.110 "uuid": "b54b6629-35c9-48e1-903a-6151b5737927", 00:14:34.110 "strip_size_kb": 0, 00:14:34.110 "state": "online", 00:14:34.110 "raid_level": "raid1", 00:14:34.110 "superblock": true, 00:14:34.110 "num_base_bdevs": 4, 00:14:34.110 "num_base_bdevs_discovered": 3, 00:14:34.110 
"num_base_bdevs_operational": 3, 00:14:34.110 "base_bdevs_list": [ 00:14:34.110 { 00:14:34.110 "name": "spare", 00:14:34.110 "uuid": "f172ecfe-59b6-52db-855f-8d42c449433d", 00:14:34.110 "is_configured": true, 00:14:34.110 "data_offset": 2048, 00:14:34.110 "data_size": 63488 00:14:34.110 }, 00:14:34.110 { 00:14:34.110 "name": null, 00:14:34.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.110 "is_configured": false, 00:14:34.110 "data_offset": 0, 00:14:34.110 "data_size": 63488 00:14:34.110 }, 00:14:34.110 { 00:14:34.110 "name": "BaseBdev3", 00:14:34.110 "uuid": "354ac570-9b14-5eae-9c59-2b5bc3f2c068", 00:14:34.110 "is_configured": true, 00:14:34.110 "data_offset": 2048, 00:14:34.110 "data_size": 63488 00:14:34.110 }, 00:14:34.110 { 00:14:34.110 "name": "BaseBdev4", 00:14:34.111 "uuid": "4bd478bc-dcfc-554a-b531-f9150c0457e4", 00:14:34.111 "is_configured": true, 00:14:34.111 "data_offset": 2048, 00:14:34.111 "data_size": 63488 00:14:34.111 } 00:14:34.111 ] 00:14:34.111 }' 00:14:34.111 17:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.111 17:31:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.370 86.12 IOPS, 258.38 MiB/s [2024-12-07T17:31:07.752Z] 17:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:34.370 17:31:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.370 17:31:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.370 [2024-12-07 17:31:07.687133] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:34.370 [2024-12-07 17:31:07.687187] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:34.370 00:14:34.370 Latency(us) 00:14:34.370 [2024-12-07T17:31:07.752Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:34.370 
Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:34.370 raid_bdev1 : 8.29 84.09 252.27 0.00 0.00 17000.30 327.32 115847.04 00:14:34.370 [2024-12-07T17:31:07.752Z] =================================================================================================================== 00:14:34.370 [2024-12-07T17:31:07.752Z] Total : 84.09 252.27 0.00 0.00 17000.30 327.32 115847.04 00:14:34.370 [2024-12-07 17:31:07.736547] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:34.370 [2024-12-07 17:31:07.736628] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:34.370 [2024-12-07 17:31:07.736743] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:34.370 [2024-12-07 17:31:07.736756] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:34.370 { 00:14:34.370 "results": [ 00:14:34.370 { 00:14:34.370 "job": "raid_bdev1", 00:14:34.370 "core_mask": "0x1", 00:14:34.370 "workload": "randrw", 00:14:34.370 "percentage": 50, 00:14:34.370 "status": "finished", 00:14:34.370 "queue_depth": 2, 00:14:34.370 "io_size": 3145728, 00:14:34.370 "runtime": 8.288871, 00:14:34.370 "iops": 84.0886533280588, 00:14:34.370 "mibps": 252.2659599841764, 00:14:34.370 "io_failed": 0, 00:14:34.370 "io_timeout": 0, 00:14:34.370 "avg_latency_us": 17000.29899318978, 00:14:34.370 "min_latency_us": 327.32227074235806, 00:14:34.370 "max_latency_us": 115847.04279475982 00:14:34.370 } 00:14:34.370 ], 00:14:34.370 "core_count": 1 00:14:34.370 } 00:14:34.370 17:31:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.370 17:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.370 17:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:34.370 17:31:07 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.370 17:31:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.630 17:31:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.630 17:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:34.630 17:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:34.630 17:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:34.630 17:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:34.630 17:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:34.630 17:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:34.630 17:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:34.630 17:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:34.630 17:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:34.630 17:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:34.630 17:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:34.630 17:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:34.630 17:31:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:34.630 /dev/nbd0 00:14:34.630 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:34.889 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:34.889 
17:31:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:34.889 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:34.889 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:34.889 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:34.889 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:34.889 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:34.889 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:34.889 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:34.889 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:34.889 1+0 records in 00:14:34.889 1+0 records out 00:14:34.889 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000355288 s, 11.5 MB/s 00:14:34.889 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:34.889 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:34.889 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:34.889 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:34.889 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:34.889 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:34.889 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:14:34.889 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:34.889 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:34.889 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:14:34.889 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:34.889 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:34.889 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:34.889 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:34.889 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:34.889 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:34.889 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:34.889 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:34.889 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:34.889 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:34.889 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:34.889 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:14:34.889 /dev/nbd1 00:14:34.889 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:34.889 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:34.889 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:34.889 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:34.889 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:34.889 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:34.889 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:34.889 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:34.889 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:34.889 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:34.889 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:34.889 1+0 records in 00:14:34.889 1+0 records out 00:14:34.889 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000569917 s, 7.2 MB/s 00:14:34.889 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:35.148 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:35.148 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:35.148 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:35.148 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:35.148 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:35.148 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:35.148 17:31:08 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:35.148 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:35.148 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:35.148 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:35.148 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:35.148 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:35.148 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:35.148 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:35.408 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:35.408 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:35.408 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:35.408 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:35.408 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:35.408 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:35.408 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:35.408 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:35.408 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:35.408 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 
00:14:35.408 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:35.408 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:35.408 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:14:35.408 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:35.408 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:35.408 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:35.408 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:35.408 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:35.408 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:35.408 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:14:35.667 /dev/nbd1 00:14:35.668 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:35.668 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:35.668 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:35.668 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:35.668 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:35.668 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:35.668 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:35.668 17:31:08 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:35.668 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:35.668 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:35.668 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:35.668 1+0 records in 00:14:35.668 1+0 records out 00:14:35.668 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000354789 s, 11.5 MB/s 00:14:35.668 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:35.668 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:35.668 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:35.668 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:35.668 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:35.668 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:35.668 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:35.668 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:35.668 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:35.668 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:35.668 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:35.668 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # 
local nbd_list 00:14:35.668 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:35.668 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:35.668 17:31:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:35.929 17:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:35.929 17:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:35.929 17:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:35.929 17:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:35.929 17:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:35.929 17:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:35.929 17:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:35.929 17:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:35.929 17:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:35.929 17:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:35.929 17:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:35.929 17:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:35.929 17:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:35.929 17:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:35.929 17:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:36.194 17:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:36.194 17:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:36.194 17:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:36.194 17:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:36.194 17:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:36.194 17:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:36.194 17:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:36.194 17:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:36.194 17:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:36.194 17:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:36.194 17:31:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.194 17:31:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.194 17:31:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.194 17:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:36.194 17:31:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.194 17:31:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.194 [2024-12-07 17:31:09.432789] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:36.194 [2024-12-07 17:31:09.432943] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.194 [2024-12-07 17:31:09.432993] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:36.194 [2024-12-07 17:31:09.433030] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.194 [2024-12-07 17:31:09.435628] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.194 [2024-12-07 17:31:09.435725] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:36.194 [2024-12-07 17:31:09.435872] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:36.194 [2024-12-07 17:31:09.435985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:36.194 [2024-12-07 17:31:09.436193] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:36.194 [2024-12-07 17:31:09.436345] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:36.194 spare 00:14:36.194 17:31:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.194 17:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:36.194 17:31:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.194 17:31:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.194 [2024-12-07 17:31:09.536295] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:36.194 [2024-12-07 17:31:09.536372] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:36.194 [2024-12-07 17:31:09.536697] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:14:36.194 [2024-12-07 17:31:09.536921] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 
0x617000007b00 00:14:36.194 [2024-12-07 17:31:09.536989] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:36.194 [2024-12-07 17:31:09.537193] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:36.194 17:31:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.194 17:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:36.194 17:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:36.194 17:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:36.194 17:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:36.194 17:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:36.194 17:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:36.194 17:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.194 17:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.194 17:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.194 17:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.194 17:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.194 17:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.194 17:31:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.194 17:31:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.194 
17:31:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.454 17:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.454 "name": "raid_bdev1", 00:14:36.454 "uuid": "b54b6629-35c9-48e1-903a-6151b5737927", 00:14:36.454 "strip_size_kb": 0, 00:14:36.454 "state": "online", 00:14:36.454 "raid_level": "raid1", 00:14:36.454 "superblock": true, 00:14:36.454 "num_base_bdevs": 4, 00:14:36.454 "num_base_bdevs_discovered": 3, 00:14:36.454 "num_base_bdevs_operational": 3, 00:14:36.454 "base_bdevs_list": [ 00:14:36.454 { 00:14:36.454 "name": "spare", 00:14:36.454 "uuid": "f172ecfe-59b6-52db-855f-8d42c449433d", 00:14:36.454 "is_configured": true, 00:14:36.454 "data_offset": 2048, 00:14:36.454 "data_size": 63488 00:14:36.454 }, 00:14:36.454 { 00:14:36.454 "name": null, 00:14:36.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.454 "is_configured": false, 00:14:36.454 "data_offset": 2048, 00:14:36.454 "data_size": 63488 00:14:36.454 }, 00:14:36.454 { 00:14:36.454 "name": "BaseBdev3", 00:14:36.454 "uuid": "354ac570-9b14-5eae-9c59-2b5bc3f2c068", 00:14:36.454 "is_configured": true, 00:14:36.454 "data_offset": 2048, 00:14:36.454 "data_size": 63488 00:14:36.454 }, 00:14:36.454 { 00:14:36.454 "name": "BaseBdev4", 00:14:36.454 "uuid": "4bd478bc-dcfc-554a-b531-f9150c0457e4", 00:14:36.454 "is_configured": true, 00:14:36.454 "data_offset": 2048, 00:14:36.454 "data_size": 63488 00:14:36.454 } 00:14:36.454 ] 00:14:36.454 }' 00:14:36.454 17:31:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.454 17:31:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.714 17:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:36.714 17:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:36.714 17:31:10 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:36.714 17:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:36.714 17:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:36.714 17:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.714 17:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.714 17:31:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.714 17:31:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.714 17:31:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.714 17:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:36.714 "name": "raid_bdev1", 00:14:36.714 "uuid": "b54b6629-35c9-48e1-903a-6151b5737927", 00:14:36.714 "strip_size_kb": 0, 00:14:36.714 "state": "online", 00:14:36.714 "raid_level": "raid1", 00:14:36.714 "superblock": true, 00:14:36.714 "num_base_bdevs": 4, 00:14:36.714 "num_base_bdevs_discovered": 3, 00:14:36.714 "num_base_bdevs_operational": 3, 00:14:36.714 "base_bdevs_list": [ 00:14:36.714 { 00:14:36.714 "name": "spare", 00:14:36.714 "uuid": "f172ecfe-59b6-52db-855f-8d42c449433d", 00:14:36.714 "is_configured": true, 00:14:36.714 "data_offset": 2048, 00:14:36.714 "data_size": 63488 00:14:36.714 }, 00:14:36.714 { 00:14:36.714 "name": null, 00:14:36.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.714 "is_configured": false, 00:14:36.714 "data_offset": 2048, 00:14:36.714 "data_size": 63488 00:14:36.714 }, 00:14:36.714 { 00:14:36.714 "name": "BaseBdev3", 00:14:36.714 "uuid": "354ac570-9b14-5eae-9c59-2b5bc3f2c068", 00:14:36.714 "is_configured": true, 00:14:36.714 "data_offset": 2048, 00:14:36.714 
"data_size": 63488 00:14:36.714 }, 00:14:36.714 { 00:14:36.714 "name": "BaseBdev4", 00:14:36.714 "uuid": "4bd478bc-dcfc-554a-b531-f9150c0457e4", 00:14:36.714 "is_configured": true, 00:14:36.714 "data_offset": 2048, 00:14:36.715 "data_size": 63488 00:14:36.715 } 00:14:36.715 ] 00:14:36.715 }' 00:14:36.715 17:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:36.975 17:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:36.975 17:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:36.975 17:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:36.975 17:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.975 17:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:36.975 17:31:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.975 17:31:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.975 17:31:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.975 17:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:36.975 17:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:36.975 17:31:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.975 17:31:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.975 [2024-12-07 17:31:10.204464] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:36.975 17:31:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.975 17:31:10 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:36.975 17:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:36.975 17:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:36.975 17:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:36.975 17:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:36.975 17:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:36.975 17:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.975 17:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.975 17:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.975 17:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.975 17:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.975 17:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.975 17:31:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.975 17:31:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.975 17:31:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.975 17:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.975 "name": "raid_bdev1", 00:14:36.975 "uuid": "b54b6629-35c9-48e1-903a-6151b5737927", 00:14:36.975 "strip_size_kb": 0, 00:14:36.975 "state": "online", 00:14:36.975 "raid_level": "raid1", 00:14:36.975 
"superblock": true, 00:14:36.975 "num_base_bdevs": 4, 00:14:36.975 "num_base_bdevs_discovered": 2, 00:14:36.975 "num_base_bdevs_operational": 2, 00:14:36.975 "base_bdevs_list": [ 00:14:36.975 { 00:14:36.975 "name": null, 00:14:36.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.975 "is_configured": false, 00:14:36.975 "data_offset": 0, 00:14:36.975 "data_size": 63488 00:14:36.975 }, 00:14:36.975 { 00:14:36.975 "name": null, 00:14:36.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.975 "is_configured": false, 00:14:36.975 "data_offset": 2048, 00:14:36.975 "data_size": 63488 00:14:36.975 }, 00:14:36.975 { 00:14:36.975 "name": "BaseBdev3", 00:14:36.975 "uuid": "354ac570-9b14-5eae-9c59-2b5bc3f2c068", 00:14:36.975 "is_configured": true, 00:14:36.975 "data_offset": 2048, 00:14:36.975 "data_size": 63488 00:14:36.975 }, 00:14:36.975 { 00:14:36.975 "name": "BaseBdev4", 00:14:36.975 "uuid": "4bd478bc-dcfc-554a-b531-f9150c0457e4", 00:14:36.975 "is_configured": true, 00:14:36.975 "data_offset": 2048, 00:14:36.975 "data_size": 63488 00:14:36.975 } 00:14:36.975 ] 00:14:36.976 }' 00:14:36.976 17:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.976 17:31:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.547 17:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:37.547 17:31:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.547 17:31:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.547 [2024-12-07 17:31:10.687612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:37.547 [2024-12-07 17:31:10.687973] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:37.547 [2024-12-07 17:31:10.688041] 
bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:37.547 [2024-12-07 17:31:10.688114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:37.547 [2024-12-07 17:31:10.702684] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:14:37.547 17:31:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.547 17:31:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:37.547 [2024-12-07 17:31:10.704824] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:38.486 17:31:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:38.486 17:31:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:38.486 17:31:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:38.486 17:31:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:38.486 17:31:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:38.486 17:31:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.486 17:31:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.486 17:31:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.486 17:31:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.486 17:31:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.486 17:31:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:38.486 "name": "raid_bdev1", 00:14:38.486 "uuid": 
"b54b6629-35c9-48e1-903a-6151b5737927", 00:14:38.486 "strip_size_kb": 0, 00:14:38.486 "state": "online", 00:14:38.486 "raid_level": "raid1", 00:14:38.486 "superblock": true, 00:14:38.486 "num_base_bdevs": 4, 00:14:38.486 "num_base_bdevs_discovered": 3, 00:14:38.486 "num_base_bdevs_operational": 3, 00:14:38.486 "process": { 00:14:38.486 "type": "rebuild", 00:14:38.486 "target": "spare", 00:14:38.486 "progress": { 00:14:38.486 "blocks": 20480, 00:14:38.486 "percent": 32 00:14:38.486 } 00:14:38.486 }, 00:14:38.486 "base_bdevs_list": [ 00:14:38.486 { 00:14:38.486 "name": "spare", 00:14:38.486 "uuid": "f172ecfe-59b6-52db-855f-8d42c449433d", 00:14:38.486 "is_configured": true, 00:14:38.486 "data_offset": 2048, 00:14:38.486 "data_size": 63488 00:14:38.486 }, 00:14:38.486 { 00:14:38.486 "name": null, 00:14:38.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.486 "is_configured": false, 00:14:38.486 "data_offset": 2048, 00:14:38.486 "data_size": 63488 00:14:38.486 }, 00:14:38.486 { 00:14:38.486 "name": "BaseBdev3", 00:14:38.486 "uuid": "354ac570-9b14-5eae-9c59-2b5bc3f2c068", 00:14:38.486 "is_configured": true, 00:14:38.486 "data_offset": 2048, 00:14:38.486 "data_size": 63488 00:14:38.486 }, 00:14:38.486 { 00:14:38.486 "name": "BaseBdev4", 00:14:38.486 "uuid": "4bd478bc-dcfc-554a-b531-f9150c0457e4", 00:14:38.486 "is_configured": true, 00:14:38.486 "data_offset": 2048, 00:14:38.486 "data_size": 63488 00:14:38.486 } 00:14:38.486 ] 00:14:38.486 }' 00:14:38.486 17:31:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:38.486 17:31:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:38.486 17:31:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:38.486 17:31:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:38.486 17:31:11 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:38.486 17:31:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.486 17:31:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.486 [2024-12-07 17:31:11.864701] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:38.745 [2024-12-07 17:31:11.913374] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:38.746 [2024-12-07 17:31:11.913439] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:38.746 [2024-12-07 17:31:11.913462] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:38.746 [2024-12-07 17:31:11.913471] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:38.746 17:31:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.746 17:31:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:38.746 17:31:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:38.746 17:31:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:38.746 17:31:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:38.746 17:31:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:38.746 17:31:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:38.746 17:31:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.746 17:31:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.746 17:31:11 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.746 17:31:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.746 17:31:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.746 17:31:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.746 17:31:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.746 17:31:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.746 17:31:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.746 17:31:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.746 "name": "raid_bdev1", 00:14:38.746 "uuid": "b54b6629-35c9-48e1-903a-6151b5737927", 00:14:38.746 "strip_size_kb": 0, 00:14:38.746 "state": "online", 00:14:38.746 "raid_level": "raid1", 00:14:38.746 "superblock": true, 00:14:38.746 "num_base_bdevs": 4, 00:14:38.746 "num_base_bdevs_discovered": 2, 00:14:38.746 "num_base_bdevs_operational": 2, 00:14:38.746 "base_bdevs_list": [ 00:14:38.746 { 00:14:38.746 "name": null, 00:14:38.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.746 "is_configured": false, 00:14:38.746 "data_offset": 0, 00:14:38.746 "data_size": 63488 00:14:38.746 }, 00:14:38.746 { 00:14:38.746 "name": null, 00:14:38.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.746 "is_configured": false, 00:14:38.746 "data_offset": 2048, 00:14:38.746 "data_size": 63488 00:14:38.746 }, 00:14:38.746 { 00:14:38.746 "name": "BaseBdev3", 00:14:38.746 "uuid": "354ac570-9b14-5eae-9c59-2b5bc3f2c068", 00:14:38.746 "is_configured": true, 00:14:38.746 "data_offset": 2048, 00:14:38.746 "data_size": 63488 00:14:38.746 }, 00:14:38.746 { 00:14:38.746 "name": "BaseBdev4", 00:14:38.746 "uuid": "4bd478bc-dcfc-554a-b531-f9150c0457e4", 
00:14:38.746 "is_configured": true, 00:14:38.746 "data_offset": 2048, 00:14:38.746 "data_size": 63488 00:14:38.746 } 00:14:38.746 ] 00:14:38.746 }' 00:14:38.746 17:31:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.746 17:31:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.022 17:31:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:39.022 17:31:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.022 17:31:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.022 [2024-12-07 17:31:12.365202] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:39.022 [2024-12-07 17:31:12.365268] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:39.023 [2024-12-07 17:31:12.365308] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:14:39.023 [2024-12-07 17:31:12.365320] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:39.023 [2024-12-07 17:31:12.365847] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:39.023 [2024-12-07 17:31:12.365867] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:39.023 [2024-12-07 17:31:12.365984] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:39.023 [2024-12-07 17:31:12.365999] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:39.023 [2024-12-07 17:31:12.366015] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:39.023 [2024-12-07 17:31:12.366040] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:39.023 [2024-12-07 17:31:12.379262] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:14:39.023 spare 00:14:39.023 17:31:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.023 17:31:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:39.023 [2024-12-07 17:31:12.381439] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:40.406 17:31:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:40.406 17:31:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:40.406 17:31:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:40.406 17:31:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:40.406 17:31:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:40.406 17:31:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.406 17:31:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.406 17:31:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.406 17:31:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.406 17:31:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.406 17:31:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:40.406 "name": "raid_bdev1", 00:14:40.406 "uuid": "b54b6629-35c9-48e1-903a-6151b5737927", 00:14:40.406 "strip_size_kb": 0, 00:14:40.406 
"state": "online", 00:14:40.406 "raid_level": "raid1", 00:14:40.406 "superblock": true, 00:14:40.406 "num_base_bdevs": 4, 00:14:40.406 "num_base_bdevs_discovered": 3, 00:14:40.406 "num_base_bdevs_operational": 3, 00:14:40.406 "process": { 00:14:40.406 "type": "rebuild", 00:14:40.406 "target": "spare", 00:14:40.406 "progress": { 00:14:40.406 "blocks": 20480, 00:14:40.406 "percent": 32 00:14:40.406 } 00:14:40.406 }, 00:14:40.406 "base_bdevs_list": [ 00:14:40.406 { 00:14:40.406 "name": "spare", 00:14:40.406 "uuid": "f172ecfe-59b6-52db-855f-8d42c449433d", 00:14:40.406 "is_configured": true, 00:14:40.406 "data_offset": 2048, 00:14:40.406 "data_size": 63488 00:14:40.406 }, 00:14:40.406 { 00:14:40.406 "name": null, 00:14:40.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.406 "is_configured": false, 00:14:40.406 "data_offset": 2048, 00:14:40.406 "data_size": 63488 00:14:40.406 }, 00:14:40.406 { 00:14:40.406 "name": "BaseBdev3", 00:14:40.406 "uuid": "354ac570-9b14-5eae-9c59-2b5bc3f2c068", 00:14:40.407 "is_configured": true, 00:14:40.407 "data_offset": 2048, 00:14:40.407 "data_size": 63488 00:14:40.407 }, 00:14:40.407 { 00:14:40.407 "name": "BaseBdev4", 00:14:40.407 "uuid": "4bd478bc-dcfc-554a-b531-f9150c0457e4", 00:14:40.407 "is_configured": true, 00:14:40.407 "data_offset": 2048, 00:14:40.407 "data_size": 63488 00:14:40.407 } 00:14:40.407 ] 00:14:40.407 }' 00:14:40.407 17:31:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:40.407 17:31:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:40.407 17:31:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:40.407 17:31:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:40.407 17:31:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:40.407 17:31:13 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.407 17:31:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.407 [2024-12-07 17:31:13.538135] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:40.407 [2024-12-07 17:31:13.589992] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:40.407 [2024-12-07 17:31:13.590111] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:40.407 [2024-12-07 17:31:13.590154] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:40.407 [2024-12-07 17:31:13.590181] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:40.407 17:31:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.407 17:31:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:40.407 17:31:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:40.407 17:31:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:40.407 17:31:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:40.407 17:31:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:40.407 17:31:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:40.407 17:31:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.407 17:31:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.407 17:31:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.407 17:31:13 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.407 17:31:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.407 17:31:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.407 17:31:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.407 17:31:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.407 17:31:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.407 17:31:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.407 "name": "raid_bdev1", 00:14:40.407 "uuid": "b54b6629-35c9-48e1-903a-6151b5737927", 00:14:40.407 "strip_size_kb": 0, 00:14:40.407 "state": "online", 00:14:40.407 "raid_level": "raid1", 00:14:40.407 "superblock": true, 00:14:40.407 "num_base_bdevs": 4, 00:14:40.407 "num_base_bdevs_discovered": 2, 00:14:40.407 "num_base_bdevs_operational": 2, 00:14:40.407 "base_bdevs_list": [ 00:14:40.407 { 00:14:40.407 "name": null, 00:14:40.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.407 "is_configured": false, 00:14:40.407 "data_offset": 0, 00:14:40.407 "data_size": 63488 00:14:40.407 }, 00:14:40.407 { 00:14:40.407 "name": null, 00:14:40.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.407 "is_configured": false, 00:14:40.407 "data_offset": 2048, 00:14:40.407 "data_size": 63488 00:14:40.407 }, 00:14:40.407 { 00:14:40.407 "name": "BaseBdev3", 00:14:40.407 "uuid": "354ac570-9b14-5eae-9c59-2b5bc3f2c068", 00:14:40.407 "is_configured": true, 00:14:40.407 "data_offset": 2048, 00:14:40.407 "data_size": 63488 00:14:40.407 }, 00:14:40.407 { 00:14:40.407 "name": "BaseBdev4", 00:14:40.407 "uuid": "4bd478bc-dcfc-554a-b531-f9150c0457e4", 00:14:40.407 "is_configured": true, 00:14:40.407 "data_offset": 2048, 00:14:40.407 
"data_size": 63488 00:14:40.407 } 00:14:40.407 ] 00:14:40.407 }' 00:14:40.407 17:31:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.407 17:31:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.667 17:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:40.667 17:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:40.667 17:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:40.667 17:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:40.667 17:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:40.667 17:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.667 17:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.667 17:31:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.667 17:31:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.667 17:31:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.927 17:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:40.927 "name": "raid_bdev1", 00:14:40.927 "uuid": "b54b6629-35c9-48e1-903a-6151b5737927", 00:14:40.927 "strip_size_kb": 0, 00:14:40.927 "state": "online", 00:14:40.927 "raid_level": "raid1", 00:14:40.927 "superblock": true, 00:14:40.927 "num_base_bdevs": 4, 00:14:40.927 "num_base_bdevs_discovered": 2, 00:14:40.927 "num_base_bdevs_operational": 2, 00:14:40.927 "base_bdevs_list": [ 00:14:40.927 { 00:14:40.927 "name": null, 00:14:40.927 "uuid": "00000000-0000-0000-0000-000000000000", 
00:14:40.927 "is_configured": false, 00:14:40.927 "data_offset": 0, 00:14:40.927 "data_size": 63488 00:14:40.927 }, 00:14:40.927 { 00:14:40.928 "name": null, 00:14:40.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.928 "is_configured": false, 00:14:40.928 "data_offset": 2048, 00:14:40.928 "data_size": 63488 00:14:40.928 }, 00:14:40.928 { 00:14:40.928 "name": "BaseBdev3", 00:14:40.928 "uuid": "354ac570-9b14-5eae-9c59-2b5bc3f2c068", 00:14:40.928 "is_configured": true, 00:14:40.928 "data_offset": 2048, 00:14:40.928 "data_size": 63488 00:14:40.928 }, 00:14:40.928 { 00:14:40.928 "name": "BaseBdev4", 00:14:40.928 "uuid": "4bd478bc-dcfc-554a-b531-f9150c0457e4", 00:14:40.928 "is_configured": true, 00:14:40.928 "data_offset": 2048, 00:14:40.928 "data_size": 63488 00:14:40.928 } 00:14:40.928 ] 00:14:40.928 }' 00:14:40.928 17:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:40.928 17:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:40.928 17:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:40.928 17:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:40.928 17:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:40.928 17:31:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.928 17:31:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.928 17:31:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.928 17:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:40.928 17:31:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.928 17:31:14 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.928 [2024-12-07 17:31:14.181679] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:40.928 [2024-12-07 17:31:14.181750] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:40.928 [2024-12-07 17:31:14.181776] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:14:40.928 [2024-12-07 17:31:14.181790] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:40.928 [2024-12-07 17:31:14.182352] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:40.928 [2024-12-07 17:31:14.182386] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:40.928 [2024-12-07 17:31:14.182476] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:40.928 [2024-12-07 17:31:14.182496] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:40.928 [2024-12-07 17:31:14.182505] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:40.928 [2024-12-07 17:31:14.182519] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:40.928 BaseBdev1 00:14:40.928 17:31:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.928 17:31:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:41.867 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:41.867 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:41.867 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:14:41.867 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:41.867 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:41.867 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:41.867 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.867 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.867 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.867 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.867 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.867 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.867 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.867 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.867 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.867 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.867 "name": "raid_bdev1", 00:14:41.867 "uuid": "b54b6629-35c9-48e1-903a-6151b5737927", 00:14:41.867 "strip_size_kb": 0, 00:14:41.867 "state": "online", 00:14:41.867 "raid_level": "raid1", 00:14:41.867 "superblock": true, 00:14:41.867 "num_base_bdevs": 4, 00:14:41.867 "num_base_bdevs_discovered": 2, 00:14:41.867 "num_base_bdevs_operational": 2, 00:14:41.867 "base_bdevs_list": [ 00:14:41.867 { 00:14:41.867 "name": null, 00:14:41.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.867 "is_configured": false, 00:14:41.867 
"data_offset": 0, 00:14:41.867 "data_size": 63488 00:14:41.867 }, 00:14:41.867 { 00:14:41.867 "name": null, 00:14:41.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.867 "is_configured": false, 00:14:41.868 "data_offset": 2048, 00:14:41.868 "data_size": 63488 00:14:41.868 }, 00:14:41.868 { 00:14:41.868 "name": "BaseBdev3", 00:14:41.868 "uuid": "354ac570-9b14-5eae-9c59-2b5bc3f2c068", 00:14:41.868 "is_configured": true, 00:14:41.868 "data_offset": 2048, 00:14:41.868 "data_size": 63488 00:14:41.868 }, 00:14:41.868 { 00:14:41.868 "name": "BaseBdev4", 00:14:41.868 "uuid": "4bd478bc-dcfc-554a-b531-f9150c0457e4", 00:14:41.868 "is_configured": true, 00:14:41.868 "data_offset": 2048, 00:14:41.868 "data_size": 63488 00:14:41.868 } 00:14:41.868 ] 00:14:41.868 }' 00:14:41.868 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.868 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.435 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:42.435 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:42.435 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:42.435 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:42.435 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:42.435 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.436 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.436 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.436 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:14:42.436 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.436 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:42.436 "name": "raid_bdev1", 00:14:42.436 "uuid": "b54b6629-35c9-48e1-903a-6151b5737927", 00:14:42.436 "strip_size_kb": 0, 00:14:42.436 "state": "online", 00:14:42.436 "raid_level": "raid1", 00:14:42.436 "superblock": true, 00:14:42.436 "num_base_bdevs": 4, 00:14:42.436 "num_base_bdevs_discovered": 2, 00:14:42.436 "num_base_bdevs_operational": 2, 00:14:42.436 "base_bdevs_list": [ 00:14:42.436 { 00:14:42.436 "name": null, 00:14:42.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.436 "is_configured": false, 00:14:42.436 "data_offset": 0, 00:14:42.436 "data_size": 63488 00:14:42.436 }, 00:14:42.436 { 00:14:42.436 "name": null, 00:14:42.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.436 "is_configured": false, 00:14:42.436 "data_offset": 2048, 00:14:42.436 "data_size": 63488 00:14:42.436 }, 00:14:42.436 { 00:14:42.436 "name": "BaseBdev3", 00:14:42.436 "uuid": "354ac570-9b14-5eae-9c59-2b5bc3f2c068", 00:14:42.436 "is_configured": true, 00:14:42.436 "data_offset": 2048, 00:14:42.436 "data_size": 63488 00:14:42.436 }, 00:14:42.436 { 00:14:42.436 "name": "BaseBdev4", 00:14:42.436 "uuid": "4bd478bc-dcfc-554a-b531-f9150c0457e4", 00:14:42.436 "is_configured": true, 00:14:42.436 "data_offset": 2048, 00:14:42.436 "data_size": 63488 00:14:42.436 } 00:14:42.436 ] 00:14:42.436 }' 00:14:42.436 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:42.436 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:42.436 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:42.436 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:42.436 
17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:42.436 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:14:42.436 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:42.436 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:42.436 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:42.436 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:42.436 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:42.436 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:42.436 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.436 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.436 [2024-12-07 17:31:15.815523] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:42.695 [2024-12-07 17:31:15.815722] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:42.695 [2024-12-07 17:31:15.815739] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:42.695 request: 00:14:42.695 { 00:14:42.695 "base_bdev": "BaseBdev1", 00:14:42.695 "raid_bdev": "raid_bdev1", 00:14:42.695 "method": "bdev_raid_add_base_bdev", 00:14:42.695 "req_id": 1 00:14:42.695 } 00:14:42.695 Got JSON-RPC error response 00:14:42.695 response: 00:14:42.695 { 00:14:42.695 "code": -22, 00:14:42.695 
"message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:42.695 } 00:14:42.695 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:42.695 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:14:42.695 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:42.695 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:42.695 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:42.695 17:31:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:43.633 17:31:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:43.633 17:31:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:43.633 17:31:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:43.633 17:31:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:43.633 17:31:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:43.633 17:31:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:43.633 17:31:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.633 17:31:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.633 17:31:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.633 17:31:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.633 17:31:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.633 17:31:16 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.633 17:31:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.633 17:31:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.633 17:31:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.633 17:31:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.633 "name": "raid_bdev1", 00:14:43.633 "uuid": "b54b6629-35c9-48e1-903a-6151b5737927", 00:14:43.633 "strip_size_kb": 0, 00:14:43.633 "state": "online", 00:14:43.633 "raid_level": "raid1", 00:14:43.633 "superblock": true, 00:14:43.633 "num_base_bdevs": 4, 00:14:43.633 "num_base_bdevs_discovered": 2, 00:14:43.633 "num_base_bdevs_operational": 2, 00:14:43.633 "base_bdevs_list": [ 00:14:43.633 { 00:14:43.633 "name": null, 00:14:43.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.633 "is_configured": false, 00:14:43.633 "data_offset": 0, 00:14:43.633 "data_size": 63488 00:14:43.633 }, 00:14:43.633 { 00:14:43.633 "name": null, 00:14:43.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.633 "is_configured": false, 00:14:43.633 "data_offset": 2048, 00:14:43.633 "data_size": 63488 00:14:43.633 }, 00:14:43.633 { 00:14:43.633 "name": "BaseBdev3", 00:14:43.633 "uuid": "354ac570-9b14-5eae-9c59-2b5bc3f2c068", 00:14:43.633 "is_configured": true, 00:14:43.633 "data_offset": 2048, 00:14:43.633 "data_size": 63488 00:14:43.633 }, 00:14:43.633 { 00:14:43.633 "name": "BaseBdev4", 00:14:43.633 "uuid": "4bd478bc-dcfc-554a-b531-f9150c0457e4", 00:14:43.633 "is_configured": true, 00:14:43.633 "data_offset": 2048, 00:14:43.633 "data_size": 63488 00:14:43.633 } 00:14:43.633 ] 00:14:43.633 }' 00:14:43.633 17:31:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.633 17:31:16 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.203 17:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:44.203 17:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:44.203 17:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:44.203 17:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:44.203 17:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:44.203 17:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.203 17:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.203 17:31:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.203 17:31:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.203 17:31:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.203 17:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:44.203 "name": "raid_bdev1", 00:14:44.203 "uuid": "b54b6629-35c9-48e1-903a-6151b5737927", 00:14:44.203 "strip_size_kb": 0, 00:14:44.203 "state": "online", 00:14:44.203 "raid_level": "raid1", 00:14:44.203 "superblock": true, 00:14:44.203 "num_base_bdevs": 4, 00:14:44.203 "num_base_bdevs_discovered": 2, 00:14:44.203 "num_base_bdevs_operational": 2, 00:14:44.203 "base_bdevs_list": [ 00:14:44.203 { 00:14:44.203 "name": null, 00:14:44.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.203 "is_configured": false, 00:14:44.203 "data_offset": 0, 00:14:44.203 "data_size": 63488 00:14:44.203 }, 00:14:44.203 { 00:14:44.203 "name": null, 00:14:44.203 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:44.203 "is_configured": false, 00:14:44.203 "data_offset": 2048, 00:14:44.203 "data_size": 63488 00:14:44.203 }, 00:14:44.203 { 00:14:44.203 "name": "BaseBdev3", 00:14:44.203 "uuid": "354ac570-9b14-5eae-9c59-2b5bc3f2c068", 00:14:44.203 "is_configured": true, 00:14:44.203 "data_offset": 2048, 00:14:44.203 "data_size": 63488 00:14:44.203 }, 00:14:44.203 { 00:14:44.203 "name": "BaseBdev4", 00:14:44.203 "uuid": "4bd478bc-dcfc-554a-b531-f9150c0457e4", 00:14:44.203 "is_configured": true, 00:14:44.203 "data_offset": 2048, 00:14:44.203 "data_size": 63488 00:14:44.203 } 00:14:44.203 ] 00:14:44.203 }' 00:14:44.203 17:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:44.203 17:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:44.203 17:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:44.203 17:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:44.203 17:31:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79156 00:14:44.203 17:31:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 79156 ']' 00:14:44.203 17:31:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 79156 00:14:44.203 17:31:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:14:44.203 17:31:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:44.203 17:31:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79156 00:14:44.203 killing process with pid 79156 00:14:44.203 Received shutdown signal, test time was about 18.039362 seconds 00:14:44.203 00:14:44.203 Latency(us) 00:14:44.203 [2024-12-07T17:31:17.585Z] Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max 00:14:44.203 [2024-12-07T17:31:17.585Z] =================================================================================================================== 00:14:44.203 [2024-12-07T17:31:17.585Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:44.203 17:31:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:44.203 17:31:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:44.203 17:31:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79156' 00:14:44.203 17:31:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 79156 00:14:44.203 [2024-12-07 17:31:17.446674] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:44.203 [2024-12-07 17:31:17.446781] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:44.203 17:31:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 79156 00:14:44.203 [2024-12-07 17:31:17.446848] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:44.203 [2024-12-07 17:31:17.446860] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:44.774 [2024-12-07 17:31:17.877273] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:46.156 17:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:46.156 00:14:46.156 real 0m21.534s 00:14:46.156 user 0m27.941s 00:14:46.156 sys 0m2.610s 00:14:46.156 17:31:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:46.156 ************************************ 00:14:46.156 END TEST raid_rebuild_test_sb_io 00:14:46.156 ************************************ 00:14:46.156 17:31:19 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:14:46.156 17:31:19 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:14:46.156 17:31:19 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:14:46.156 17:31:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:46.156 17:31:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:46.156 17:31:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:46.156 ************************************ 00:14:46.156 START TEST raid5f_state_function_test 00:14:46.156 ************************************ 00:14:46.156 17:31:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:14:46.156 17:31:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:46.156 17:31:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:46.157 17:31:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:46.157 17:31:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:46.157 17:31:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:46.157 17:31:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:46.157 17:31:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:46.157 17:31:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:46.157 17:31:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:46.157 17:31:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:46.157 17:31:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:46.157 17:31:19 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:46.157 17:31:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:46.157 17:31:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:46.157 17:31:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:46.157 17:31:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:46.157 17:31:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:46.157 17:31:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:46.157 17:31:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:46.157 17:31:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:46.157 17:31:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:46.157 17:31:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:46.157 17:31:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:46.157 17:31:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:46.157 17:31:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:46.157 17:31:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:46.157 Process raid pid: 79879 00:14:46.157 17:31:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=79879 00:14:46.157 17:31:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:46.157 
17:31:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 79879' 00:14:46.157 17:31:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 79879 00:14:46.157 17:31:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 79879 ']' 00:14:46.157 17:31:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:46.157 17:31:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:46.157 17:31:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:46.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:46.157 17:31:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:46.157 17:31:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.157 [2024-12-07 17:31:19.301861] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:14:46.157 [2024-12-07 17:31:19.302081] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:46.157 [2024-12-07 17:31:19.476380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.417 [2024-12-07 17:31:19.605181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:46.677 [2024-12-07 17:31:19.845718] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:46.677 [2024-12-07 17:31:19.845769] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:46.937 17:31:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:46.937 17:31:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:14:46.937 17:31:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:46.937 17:31:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.937 17:31:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.937 [2024-12-07 17:31:20.121203] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:46.937 [2024-12-07 17:31:20.121285] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:46.937 [2024-12-07 17:31:20.121297] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:46.937 [2024-12-07 17:31:20.121308] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:46.937 [2024-12-07 17:31:20.121315] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:14:46.937 [2024-12-07 17:31:20.121327] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:46.937 17:31:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.937 17:31:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:46.937 17:31:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:46.937 17:31:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:46.937 17:31:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:46.937 17:31:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:46.937 17:31:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:46.937 17:31:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.937 17:31:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.937 17:31:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.937 17:31:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.937 17:31:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.937 17:31:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.937 17:31:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.937 17:31:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:46.937 17:31:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:14:46.937 17:31:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.937 "name": "Existed_Raid", 00:14:46.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.937 "strip_size_kb": 64, 00:14:46.937 "state": "configuring", 00:14:46.937 "raid_level": "raid5f", 00:14:46.937 "superblock": false, 00:14:46.937 "num_base_bdevs": 3, 00:14:46.937 "num_base_bdevs_discovered": 0, 00:14:46.937 "num_base_bdevs_operational": 3, 00:14:46.937 "base_bdevs_list": [ 00:14:46.937 { 00:14:46.937 "name": "BaseBdev1", 00:14:46.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.937 "is_configured": false, 00:14:46.937 "data_offset": 0, 00:14:46.937 "data_size": 0 00:14:46.937 }, 00:14:46.937 { 00:14:46.937 "name": "BaseBdev2", 00:14:46.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.937 "is_configured": false, 00:14:46.937 "data_offset": 0, 00:14:46.937 "data_size": 0 00:14:46.937 }, 00:14:46.937 { 00:14:46.937 "name": "BaseBdev3", 00:14:46.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.937 "is_configured": false, 00:14:46.937 "data_offset": 0, 00:14:46.937 "data_size": 0 00:14:46.937 } 00:14:46.937 ] 00:14:46.937 }' 00:14:46.937 17:31:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.937 17:31:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.198 17:31:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:47.198 17:31:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.198 17:31:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.198 [2024-12-07 17:31:20.560400] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:47.198 [2024-12-07 17:31:20.560523] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:14:47.198 17:31:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.198 17:31:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:47.198 17:31:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.198 17:31:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.198 [2024-12-07 17:31:20.572378] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:47.198 [2024-12-07 17:31:20.572477] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:47.198 [2024-12-07 17:31:20.572507] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:47.198 [2024-12-07 17:31:20.572534] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:47.198 [2024-12-07 17:31:20.572555] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:47.198 [2024-12-07 17:31:20.572580] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:47.198 17:31:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.198 17:31:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:47.198 17:31:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.459 17:31:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.459 [2024-12-07 17:31:20.621689] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:47.459 BaseBdev1 00:14:47.459 17:31:20 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.459 17:31:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:47.459 17:31:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:47.459 17:31:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:47.459 17:31:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:47.459 17:31:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:47.459 17:31:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:47.459 17:31:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:47.459 17:31:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.459 17:31:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.459 17:31:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.459 17:31:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:47.459 17:31:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.459 17:31:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.459 [ 00:14:47.459 { 00:14:47.459 "name": "BaseBdev1", 00:14:47.459 "aliases": [ 00:14:47.459 "2aae777b-e569-47ca-9603-4fa8f0ed87af" 00:14:47.459 ], 00:14:47.459 "product_name": "Malloc disk", 00:14:47.459 "block_size": 512, 00:14:47.459 "num_blocks": 65536, 00:14:47.459 "uuid": "2aae777b-e569-47ca-9603-4fa8f0ed87af", 00:14:47.459 "assigned_rate_limits": { 00:14:47.459 "rw_ios_per_sec": 0, 00:14:47.459 
"rw_mbytes_per_sec": 0, 00:14:47.459 "r_mbytes_per_sec": 0, 00:14:47.459 "w_mbytes_per_sec": 0 00:14:47.459 }, 00:14:47.459 "claimed": true, 00:14:47.459 "claim_type": "exclusive_write", 00:14:47.459 "zoned": false, 00:14:47.459 "supported_io_types": { 00:14:47.459 "read": true, 00:14:47.459 "write": true, 00:14:47.459 "unmap": true, 00:14:47.459 "flush": true, 00:14:47.459 "reset": true, 00:14:47.459 "nvme_admin": false, 00:14:47.459 "nvme_io": false, 00:14:47.459 "nvme_io_md": false, 00:14:47.459 "write_zeroes": true, 00:14:47.459 "zcopy": true, 00:14:47.459 "get_zone_info": false, 00:14:47.459 "zone_management": false, 00:14:47.459 "zone_append": false, 00:14:47.459 "compare": false, 00:14:47.459 "compare_and_write": false, 00:14:47.459 "abort": true, 00:14:47.459 "seek_hole": false, 00:14:47.459 "seek_data": false, 00:14:47.459 "copy": true, 00:14:47.459 "nvme_iov_md": false 00:14:47.459 }, 00:14:47.459 "memory_domains": [ 00:14:47.459 { 00:14:47.459 "dma_device_id": "system", 00:14:47.459 "dma_device_type": 1 00:14:47.459 }, 00:14:47.459 { 00:14:47.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:47.459 "dma_device_type": 2 00:14:47.459 } 00:14:47.459 ], 00:14:47.459 "driver_specific": {} 00:14:47.459 } 00:14:47.459 ] 00:14:47.459 17:31:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.459 17:31:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:47.459 17:31:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:47.459 17:31:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:47.459 17:31:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:47.459 17:31:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:47.459 17:31:20 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:47.459 17:31:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:47.459 17:31:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.459 17:31:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.459 17:31:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.459 17:31:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.459 17:31:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.459 17:31:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.459 17:31:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.459 17:31:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.459 17:31:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.459 17:31:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.459 "name": "Existed_Raid", 00:14:47.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.459 "strip_size_kb": 64, 00:14:47.459 "state": "configuring", 00:14:47.459 "raid_level": "raid5f", 00:14:47.459 "superblock": false, 00:14:47.459 "num_base_bdevs": 3, 00:14:47.459 "num_base_bdevs_discovered": 1, 00:14:47.459 "num_base_bdevs_operational": 3, 00:14:47.459 "base_bdevs_list": [ 00:14:47.459 { 00:14:47.459 "name": "BaseBdev1", 00:14:47.459 "uuid": "2aae777b-e569-47ca-9603-4fa8f0ed87af", 00:14:47.459 "is_configured": true, 00:14:47.459 "data_offset": 0, 00:14:47.459 "data_size": 65536 00:14:47.459 }, 00:14:47.459 { 00:14:47.459 "name": 
"BaseBdev2", 00:14:47.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.459 "is_configured": false, 00:14:47.459 "data_offset": 0, 00:14:47.459 "data_size": 0 00:14:47.459 }, 00:14:47.459 { 00:14:47.459 "name": "BaseBdev3", 00:14:47.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.459 "is_configured": false, 00:14:47.459 "data_offset": 0, 00:14:47.459 "data_size": 0 00:14:47.459 } 00:14:47.459 ] 00:14:47.459 }' 00:14:47.459 17:31:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.459 17:31:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.719 17:31:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:47.719 17:31:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.719 17:31:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.719 [2024-12-07 17:31:21.084996] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:47.719 [2024-12-07 17:31:21.085086] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:47.719 17:31:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.719 17:31:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:47.719 17:31:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.719 17:31:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.719 [2024-12-07 17:31:21.093039] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:47.719 [2024-12-07 17:31:21.094970] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:14:47.719 [2024-12-07 17:31:21.095058] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:47.719 [2024-12-07 17:31:21.095075] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:47.719 [2024-12-07 17:31:21.095087] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:47.979 17:31:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.979 17:31:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:47.979 17:31:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:47.979 17:31:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:47.979 17:31:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:47.979 17:31:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:47.979 17:31:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:47.979 17:31:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:47.979 17:31:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:47.979 17:31:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.979 17:31:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.979 17:31:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.979 17:31:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.979 17:31:21 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.979 17:31:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.979 17:31:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.979 17:31:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.979 17:31:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.979 17:31:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.979 "name": "Existed_Raid", 00:14:47.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.979 "strip_size_kb": 64, 00:14:47.979 "state": "configuring", 00:14:47.979 "raid_level": "raid5f", 00:14:47.979 "superblock": false, 00:14:47.979 "num_base_bdevs": 3, 00:14:47.979 "num_base_bdevs_discovered": 1, 00:14:47.979 "num_base_bdevs_operational": 3, 00:14:47.979 "base_bdevs_list": [ 00:14:47.979 { 00:14:47.979 "name": "BaseBdev1", 00:14:47.979 "uuid": "2aae777b-e569-47ca-9603-4fa8f0ed87af", 00:14:47.979 "is_configured": true, 00:14:47.979 "data_offset": 0, 00:14:47.979 "data_size": 65536 00:14:47.979 }, 00:14:47.979 { 00:14:47.979 "name": "BaseBdev2", 00:14:47.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.979 "is_configured": false, 00:14:47.979 "data_offset": 0, 00:14:47.979 "data_size": 0 00:14:47.979 }, 00:14:47.979 { 00:14:47.979 "name": "BaseBdev3", 00:14:47.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.979 "is_configured": false, 00:14:47.979 "data_offset": 0, 00:14:47.979 "data_size": 0 00:14:47.979 } 00:14:47.979 ] 00:14:47.979 }' 00:14:47.979 17:31:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.979 17:31:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.240 17:31:21 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:14:48.240 17:31:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:48.240 17:31:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:48.240 [2024-12-07 17:31:21.558374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:14:48.240 BaseBdev2
00:14:48.240 17:31:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:48.240 17:31:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:14:48.240 17:31:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:14:48.240 17:31:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:14:48.240 17:31:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i
00:14:48.240 17:31:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:14:48.240 17:31:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:14:48.240 17:31:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:14:48.240 17:31:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:48.240 17:31:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:48.240 17:31:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:48.240 17:31:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:14:48.240 17:31:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:48.240 17:31:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:48.240 [
00:14:48.240 {
00:14:48.240 "name": "BaseBdev2",
00:14:48.240 "aliases": [
00:14:48.240 "2bb55a1f-59b5-4c27-871a-507adbde1fdc"
00:14:48.240 ],
00:14:48.240 "product_name": "Malloc disk",
00:14:48.240 "block_size": 512,
00:14:48.240 "num_blocks": 65536,
00:14:48.240 "uuid": "2bb55a1f-59b5-4c27-871a-507adbde1fdc",
00:14:48.240 "assigned_rate_limits": {
00:14:48.240 "rw_ios_per_sec": 0,
00:14:48.240 "rw_mbytes_per_sec": 0,
00:14:48.240 "r_mbytes_per_sec": 0,
00:14:48.240 "w_mbytes_per_sec": 0
00:14:48.240 },
00:14:48.240 "claimed": true,
00:14:48.240 "claim_type": "exclusive_write",
00:14:48.240 "zoned": false,
00:14:48.240 "supported_io_types": {
00:14:48.240 "read": true,
00:14:48.240 "write": true,
00:14:48.240 "unmap": true,
00:14:48.240 "flush": true,
00:14:48.240 "reset": true,
00:14:48.240 "nvme_admin": false,
00:14:48.240 "nvme_io": false,
00:14:48.240 "nvme_io_md": false,
00:14:48.240 "write_zeroes": true,
00:14:48.240 "zcopy": true,
00:14:48.240 "get_zone_info": false,
00:14:48.240 "zone_management": false,
00:14:48.241 "zone_append": false,
00:14:48.241 "compare": false,
00:14:48.241 "compare_and_write": false,
00:14:48.241 "abort": true,
00:14:48.241 "seek_hole": false,
00:14:48.241 "seek_data": false,
00:14:48.241 "copy": true,
00:14:48.241 "nvme_iov_md": false
00:14:48.241 },
00:14:48.241 "memory_domains": [
00:14:48.241 {
00:14:48.241 "dma_device_id": "system",
00:14:48.241 "dma_device_type": 1
00:14:48.241 },
00:14:48.241 {
00:14:48.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:48.241 "dma_device_type": 2
00:14:48.241 }
00:14:48.241 ],
00:14:48.241 "driver_specific": {}
00:14:48.241 }
00:14:48.241 ]
00:14:48.241 17:31:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:48.241 17:31:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:14:48.241 17:31:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:14:48.241 17:31:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:14:48.241 17:31:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:14:48.241 17:31:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:48.241 17:31:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:48.241 17:31:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:48.241 17:31:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:48.241 17:31:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:48.241 17:31:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:48.241 17:31:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:48.241 17:31:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:48.241 17:31:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:48.241 17:31:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:48.241 17:31:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:48.241 17:31:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:48.241 17:31:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:48.500 17:31:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:48.500 17:31:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:48.500 "name": "Existed_Raid",
00:14:48.500 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:48.500 "strip_size_kb": 64,
00:14:48.500 "state": "configuring",
00:14:48.500 "raid_level": "raid5f",
00:14:48.500 "superblock": false,
00:14:48.500 "num_base_bdevs": 3,
00:14:48.500 "num_base_bdevs_discovered": 2,
00:14:48.500 "num_base_bdevs_operational": 3,
00:14:48.500 "base_bdevs_list": [
00:14:48.500 {
00:14:48.500 "name": "BaseBdev1",
00:14:48.500 "uuid": "2aae777b-e569-47ca-9603-4fa8f0ed87af",
00:14:48.500 "is_configured": true,
00:14:48.500 "data_offset": 0,
00:14:48.500 "data_size": 65536
00:14:48.500 },
00:14:48.500 {
00:14:48.500 "name": "BaseBdev2",
00:14:48.500 "uuid": "2bb55a1f-59b5-4c27-871a-507adbde1fdc",
00:14:48.500 "is_configured": true,
00:14:48.500 "data_offset": 0,
00:14:48.500 "data_size": 65536
00:14:48.500 },
00:14:48.500 {
00:14:48.500 "name": "BaseBdev3",
00:14:48.500 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:48.500 "is_configured": false,
00:14:48.500 "data_offset": 0,
00:14:48.501 "data_size": 0
00:14:48.501 }
00:14:48.501 ]
00:14:48.501 }'
00:14:48.501 17:31:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:48.501 17:31:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:48.760 17:31:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:14:48.760 17:31:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:48.760 17:31:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:48.760 [2024-12-07 17:31:22.111111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:14:48.760 [2024-12-07 17:31:22.111180] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:14:48.760 [2024-12-07 17:31:22.111198] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:14:48.761 [2024-12-07 17:31:22.111504] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:14:48.761 [2024-12-07 17:31:22.116739] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:14:48.761 [2024-12-07 17:31:22.116766] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:14:48.761 [2024-12-07 17:31:22.117108] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:48.761 BaseBdev3
00:14:48.761 17:31:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:48.761 17:31:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:14:48.761 17:31:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:14:48.761 17:31:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:14:48.761 17:31:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i
00:14:48.761 17:31:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:14:48.761 17:31:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:14:48.761 17:31:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:14:48.761 17:31:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:48.761 17:31:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:48.761 17:31:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:48.761 17:31:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:14:48.761 17:31:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:48.761 17:31:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:49.021 [
00:14:49.021 {
00:14:49.021 "name": "BaseBdev3",
00:14:49.021 "aliases": [
00:14:49.021 "0cdaf0eb-8bc2-4796-b910-a3c36d08a250"
00:14:49.021 ],
00:14:49.021 "product_name": "Malloc disk",
00:14:49.021 "block_size": 512,
00:14:49.021 "num_blocks": 65536,
00:14:49.021 "uuid": "0cdaf0eb-8bc2-4796-b910-a3c36d08a250",
00:14:49.021 "assigned_rate_limits": {
00:14:49.021 "rw_ios_per_sec": 0,
00:14:49.021 "rw_mbytes_per_sec": 0,
00:14:49.021 "r_mbytes_per_sec": 0,
00:14:49.021 "w_mbytes_per_sec": 0
00:14:49.021 },
00:14:49.021 "claimed": true,
00:14:49.021 "claim_type": "exclusive_write",
00:14:49.021 "zoned": false,
00:14:49.021 "supported_io_types": {
00:14:49.021 "read": true,
00:14:49.021 "write": true,
00:14:49.021 "unmap": true,
00:14:49.021 "flush": true,
00:14:49.021 "reset": true,
00:14:49.021 "nvme_admin": false,
00:14:49.021 "nvme_io": false,
00:14:49.021 "nvme_io_md": false,
00:14:49.021 "write_zeroes": true,
00:14:49.021 "zcopy": true,
00:14:49.021 "get_zone_info": false,
00:14:49.021 "zone_management": false,
00:14:49.021 "zone_append": false,
00:14:49.021 "compare": false,
00:14:49.021 "compare_and_write": false,
00:14:49.021 "abort": true,
00:14:49.021 "seek_hole": false,
00:14:49.021 "seek_data": false,
00:14:49.021 "copy": true,
00:14:49.021 "nvme_iov_md": false
00:14:49.021 },
00:14:49.021 "memory_domains": [
00:14:49.021 {
00:14:49.021 "dma_device_id": "system",
00:14:49.021 "dma_device_type": 1
00:14:49.021 },
00:14:49.021 {
00:14:49.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:49.021 "dma_device_type": 2
00:14:49.021 }
00:14:49.021 ],
00:14:49.021 "driver_specific": {}
00:14:49.021 }
00:14:49.021 ]
00:14:49.021 17:31:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:49.021 17:31:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:14:49.021 17:31:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:14:49.021 17:31:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:14:49.021 17:31:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3
00:14:49.021 17:31:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:49.021 17:31:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:49.021 17:31:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:49.021 17:31:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:49.021 17:31:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:49.021 17:31:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:49.021 17:31:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:49.021 17:31:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:49.021 17:31:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:49.021 17:31:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:49.021 17:31:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:49.021 17:31:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:49.021 17:31:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:49.021 17:31:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:49.021 17:31:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:49.021 "name": "Existed_Raid",
00:14:49.022 "uuid": "d5866271-b372-4f5a-b7ed-eb82f24e5381",
00:14:49.022 "strip_size_kb": 64,
00:14:49.022 "state": "online",
00:14:49.022 "raid_level": "raid5f",
00:14:49.022 "superblock": false,
00:14:49.022 "num_base_bdevs": 3,
00:14:49.022 "num_base_bdevs_discovered": 3,
00:14:49.022 "num_base_bdevs_operational": 3,
00:14:49.022 "base_bdevs_list": [
00:14:49.022 {
00:14:49.022 "name": "BaseBdev1",
00:14:49.022 "uuid": "2aae777b-e569-47ca-9603-4fa8f0ed87af",
00:14:49.022 "is_configured": true,
00:14:49.022 "data_offset": 0,
00:14:49.022 "data_size": 65536
00:14:49.022 },
00:14:49.022 {
00:14:49.022 "name": "BaseBdev2",
00:14:49.022 "uuid": "2bb55a1f-59b5-4c27-871a-507adbde1fdc",
00:14:49.022 "is_configured": true,
00:14:49.022 "data_offset": 0,
00:14:49.022 "data_size": 65536
00:14:49.022 },
00:14:49.022 {
00:14:49.022 "name": "BaseBdev3",
00:14:49.022 "uuid": "0cdaf0eb-8bc2-4796-b910-a3c36d08a250",
00:14:49.022 "is_configured": true,
00:14:49.022 "data_offset": 0,
00:14:49.022 "data_size": 65536
00:14:49.022 }
00:14:49.022 ]
00:14:49.022 }'
00:14:49.022 17:31:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:49.022 17:31:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:49.289 17:31:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:14:49.289 17:31:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:14:49.289 17:31:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:14:49.289 17:31:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:14:49.289 17:31:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:14:49.289 17:31:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:14:49.289 17:31:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:14:49.289 17:31:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:14:49.289 17:31:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:49.289 17:31:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:49.289 [2024-12-07 17:31:22.622985] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:14:49.289 17:31:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:49.289 17:31:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:14:49.289 "name": "Existed_Raid",
00:14:49.289 "aliases": [
00:14:49.289 "d5866271-b372-4f5a-b7ed-eb82f24e5381"
00:14:49.289 ],
00:14:49.289 "product_name": "Raid Volume",
00:14:49.289 "block_size": 512,
00:14:49.289 "num_blocks": 131072,
00:14:49.289 "uuid": "d5866271-b372-4f5a-b7ed-eb82f24e5381",
00:14:49.289 "assigned_rate_limits": {
00:14:49.289 "rw_ios_per_sec": 0,
00:14:49.289 "rw_mbytes_per_sec": 0,
00:14:49.289 "r_mbytes_per_sec": 0,
00:14:49.289 "w_mbytes_per_sec": 0
00:14:49.289 },
00:14:49.289 "claimed": false,
00:14:49.289 "zoned": false,
00:14:49.289 "supported_io_types": {
00:14:49.289 "read": true,
00:14:49.289 "write": true,
00:14:49.289 "unmap": false,
00:14:49.289 "flush": false,
00:14:49.289 "reset": true,
00:14:49.289 "nvme_admin": false,
00:14:49.289 "nvme_io": false,
00:14:49.289 "nvme_io_md": false,
00:14:49.289 "write_zeroes": true,
00:14:49.289 "zcopy": false,
00:14:49.289 "get_zone_info": false,
00:14:49.289 "zone_management": false,
00:14:49.289 "zone_append": false,
00:14:49.289 "compare": false,
00:14:49.289 "compare_and_write": false,
00:14:49.289 "abort": false,
00:14:49.289 "seek_hole": false,
00:14:49.289 "seek_data": false,
00:14:49.289 "copy": false,
00:14:49.289 "nvme_iov_md": false
00:14:49.289 },
00:14:49.289 "driver_specific": {
00:14:49.289 "raid": {
00:14:49.289 "uuid": "d5866271-b372-4f5a-b7ed-eb82f24e5381",
00:14:49.289 "strip_size_kb": 64,
00:14:49.289 "state": "online",
00:14:49.289 "raid_level": "raid5f",
00:14:49.289 "superblock": false,
00:14:49.289 "num_base_bdevs": 3,
00:14:49.289 "num_base_bdevs_discovered": 3,
00:14:49.289 "num_base_bdevs_operational": 3,
00:14:49.289 "base_bdevs_list": [
00:14:49.289 {
00:14:49.289 "name": "BaseBdev1",
00:14:49.289 "uuid": "2aae777b-e569-47ca-9603-4fa8f0ed87af",
00:14:49.289 "is_configured": true,
00:14:49.289 "data_offset": 0,
00:14:49.289 "data_size": 65536
00:14:49.289 },
00:14:49.290 {
00:14:49.290 "name": "BaseBdev2",
00:14:49.290 "uuid": "2bb55a1f-59b5-4c27-871a-507adbde1fdc",
00:14:49.290 "is_configured": true,
00:14:49.290 "data_offset": 0,
00:14:49.290 "data_size": 65536
00:14:49.290 },
00:14:49.290 {
00:14:49.290 "name": "BaseBdev3",
00:14:49.290 "uuid": "0cdaf0eb-8bc2-4796-b910-a3c36d08a250",
00:14:49.290 "is_configured": true,
00:14:49.290 "data_offset": 0,
00:14:49.290 "data_size": 65536
00:14:49.290 }
00:14:49.290 ]
00:14:49.290 }
00:14:49.290 }
00:14:49.290 }'
00:14:49.290 17:31:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:14:49.567 17:31:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:14:49.567 BaseBdev2
00:14:49.567 BaseBdev3'
00:14:49.567 17:31:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:49.567 17:31:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:14:49.567 17:31:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:14:49.567 17:31:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:14:49.567 17:31:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:49.567 17:31:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:49.567 17:31:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:49.567 17:31:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:49.567 17:31:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:14:49.567 17:31:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:14:49.567 17:31:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:14:49.567 17:31:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:49.567 17:31:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:14:49.567 17:31:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:49.567 17:31:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:49.567 17:31:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:49.567 17:31:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:14:49.567 17:31:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:14:49.567 17:31:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:14:49.567 17:31:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:49.567 17:31:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:14:49.567 17:31:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:49.567 17:31:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:49.567 17:31:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:49.567 17:31:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:14:49.567 17:31:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:14:49.567 17:31:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:14:49.568 17:31:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:49.568 17:31:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:49.568 [2024-12-07 17:31:22.870392] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:14:49.828 17:31:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:49.828 17:31:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:14:49.828 17:31:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f
00:14:49.828 17:31:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:14:49.828 17:31:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0
00:14:49.828 17:31:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online
00:14:49.828 17:31:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2
00:14:49.828 17:31:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:49.828 17:31:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:49.828 17:31:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:49.828 17:31:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:49.828 17:31:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:14:49.828 17:31:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:49.828 17:31:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:49.828 17:31:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:49.828 17:31:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:49.828 17:31:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:49.828 17:31:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:49.828 17:31:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:49.828 17:31:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:49.828 17:31:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:49.828 17:31:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:49.828 "name": "Existed_Raid",
00:14:49.828 "uuid": "d5866271-b372-4f5a-b7ed-eb82f24e5381",
00:14:49.828 "strip_size_kb": 64,
00:14:49.828 "state": "online",
00:14:49.828 "raid_level": "raid5f",
00:14:49.828 "superblock": false,
00:14:49.828 "num_base_bdevs": 3,
00:14:49.828 "num_base_bdevs_discovered": 2,
00:14:49.828 "num_base_bdevs_operational": 2,
00:14:49.828 "base_bdevs_list": [
00:14:49.828 {
00:14:49.828 "name": null,
00:14:49.828 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:49.828 "is_configured": false,
00:14:49.828 "data_offset": 0,
00:14:49.828 "data_size": 65536
00:14:49.828 },
00:14:49.828 {
00:14:49.828 "name": "BaseBdev2",
00:14:49.828 "uuid": "2bb55a1f-59b5-4c27-871a-507adbde1fdc",
00:14:49.828 "is_configured": true,
00:14:49.828 "data_offset": 0,
00:14:49.828 "data_size": 65536
00:14:49.828 },
00:14:49.828 {
00:14:49.828 "name": "BaseBdev3",
00:14:49.828 "uuid": "0cdaf0eb-8bc2-4796-b910-a3c36d08a250",
00:14:49.828 "is_configured": true,
00:14:49.828 "data_offset": 0,
00:14:49.828 "data_size": 65536
00:14:49.828 }
00:14:49.828 ]
00:14:49.828 }'
00:14:49.828 17:31:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:49.828 17:31:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:50.087 17:31:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:14:50.087 17:31:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:14:50.087 17:31:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:14:50.087 17:31:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:50.087 17:31:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:50.087 17:31:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:50.087 17:31:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:50.087 17:31:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:14:50.088 17:31:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:14:50.088 17:31:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:14:50.088 17:31:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:50.088 17:31:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:50.347 [2024-12-07 17:31:23.472765] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:14:50.347 [2024-12-07 17:31:23.472975] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:14:50.347 [2024-12-07 17:31:23.572175] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:14:50.347 17:31:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:50.347 17:31:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:14:50.347 17:31:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:14:50.347 17:31:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:50.347 17:31:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:50.347 17:31:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:14:50.347 17:31:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:50.347 17:31:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:50.347 17:31:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:14:50.347 17:31:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
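The `raid_bdev=$(rpc_cmd bdev_raid_get_bdevs all | jq -r '.[0]["name"]')` check traced above can be replayed standalone. The JSON literal below is a trimmed stand-in for live RPC output, not something an RPC socket produced:

```shell
# Sketch of the name check done via `jq -r '.[0]["name"]'` in the trace.
# `raid_json` is a hypothetical sample; a real run would pipe
# `rpc_cmd bdev_raid_get_bdevs all` instead.
raid_json='[{"name": "Existed_Raid", "state": "online"}]'
raid_bdev=$(jq -r '.[0]["name"]' <<< "$raid_json")
if [ "$raid_bdev" != Existed_Raid ]; then
	echo "unexpected raid bdev: $raid_bdev" >&2
	exit 1
fi
echo "$raid_bdev"   # prints Existed_Raid
```

The `select(.)` variant seen later (`jq -r '.[0]["name"] | select(.)'`) additionally drops a `null` result so an empty list yields an empty string rather than the literal word "null".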
00:14:50.347 17:31:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:14:50.347 17:31:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:50.347 17:31:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:50.347 [2024-12-07 17:31:23.628115] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:14:50.347 [2024-12-07 17:31:23.628174] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:14:50.608 17:31:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:50.608 17:31:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:14:50.608 17:31:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:14:50.608 17:31:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:50.608 17:31:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:14:50.608 17:31:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:50.608 17:31:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:50.608 17:31:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:50.608 17:31:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:14:50.608 17:31:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:14:50.608 17:31:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']'
00:14:50.608 17:31:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:14:50.608 17:31:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:14:50.608 17:31:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:14:50.608 17:31:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:50.608 17:31:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:50.608 BaseBdev2
00:14:50.608 17:31:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:50.608 17:31:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:14:50.608 17:31:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:14:50.608 17:31:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:14:50.608 17:31:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i
00:14:50.608 17:31:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:14:50.608 17:31:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:14:50.609 17:31:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:14:50.609 17:31:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:50.609 17:31:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:50.609 17:31:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:50.609 17:31:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:14:50.609 17:31:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:50.609 17:31:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
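The `waitforbdev` helper traced here (locals `bdev_name`, `bdev_timeout=2000`, then `bdev_wait_for_examine` and `bdev_get_bdevs -b <name> -t <timeout>`) is essentially a bounded poll. A minimal sketch of that pattern, where `lookup_bdev` is a hypothetical stand-in for the RPC query, not part of the test suite:

```shell
# Bounded-poll sketch of the waitforbdev pattern: retry a lookup every
# 100 ms until it succeeds or ~2000 ms elapse.
lookup_bdev() { [[ $1 == BaseBdev2 ]]; } # stub for illustration only

waitforbdev_sketch() {
	local bdev_name=$1 bdev_timeout=${2:-2000} i
	for ((i = 0; i < bdev_timeout; i += 100)); do
		if lookup_bdev "$bdev_name" &> /dev/null; then
			return 0
		fi
		sleep 0.1
	done
	return 1
}

waitforbdev_sketch BaseBdev2 && echo "BaseBdev2 ready" # prints "BaseBdev2 ready"
```

In the real helper the lookup is `rpc_cmd bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout"`, which pushes the timeout down into the RPC itself; the shell loop above reproduces the same observable behavior without an SPDK target running.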
00:14:50.609 [
00:14:50.609 {
00:14:50.609 "name": "BaseBdev2",
00:14:50.609 "aliases": [
00:14:50.609 "531915aa-b1c1-41ef-aed3-f0cebcf23ab1"
00:14:50.609 ],
00:14:50.609 "product_name": "Malloc disk",
00:14:50.609 "block_size": 512,
00:14:50.609 "num_blocks": 65536,
00:14:50.609 "uuid": "531915aa-b1c1-41ef-aed3-f0cebcf23ab1",
00:14:50.609 "assigned_rate_limits": {
00:14:50.609 "rw_ios_per_sec": 0,
00:14:50.609 "rw_mbytes_per_sec": 0,
00:14:50.609 "r_mbytes_per_sec": 0,
00:14:50.609 "w_mbytes_per_sec": 0
00:14:50.609 },
00:14:50.609 "claimed": false,
00:14:50.609 "zoned": false,
00:14:50.609 "supported_io_types": {
00:14:50.609 "read": true,
00:14:50.609 "write": true,
00:14:50.609 "unmap": true,
00:14:50.609 "flush": true,
00:14:50.609 "reset": true,
00:14:50.609 "nvme_admin": false,
00:14:50.609 "nvme_io": false,
00:14:50.609 "nvme_io_md": false,
00:14:50.609 "write_zeroes": true,
00:14:50.609 "zcopy": true,
00:14:50.609 "get_zone_info": false,
00:14:50.609 "zone_management": false,
00:14:50.609 "zone_append": false,
00:14:50.609 "compare": false,
00:14:50.609 "compare_and_write": false,
00:14:50.609 "abort": true,
00:14:50.609 "seek_hole": false,
00:14:50.609 "seek_data": false,
00:14:50.609 "copy": true,
00:14:50.609 "nvme_iov_md": false
00:14:50.609 },
00:14:50.609 "memory_domains": [
00:14:50.609 {
00:14:50.609 "dma_device_id": "system",
00:14:50.609 "dma_device_type": 1
00:14:50.609 },
00:14:50.609 {
00:14:50.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:50.609 "dma_device_type": 2
00:14:50.609 }
00:14:50.609 ],
00:14:50.609 "driver_specific": {}
00:14:50.609 }
00:14:50.609 ]
00:14:50.609 17:31:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:50.609 17:31:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:14:50.609 17:31:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:14:50.609 17:31:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:14:50.609 17:31:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:14:50.609 17:31:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:50.609 17:31:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:50.609 BaseBdev3
00:14:50.609 17:31:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:50.609 17:31:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:14:50.609 17:31:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:14:50.609 17:31:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:14:50.609 17:31:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i
00:14:50.609 17:31:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:14:50.609 17:31:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:14:50.609 17:31:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:14:50.609 17:31:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:50.609 17:31:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:50.609 17:31:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:50.609 17:31:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:14:50.609 17:31:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:50.609 17:31:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:50.609 [
00:14:50.609 {
00:14:50.609 "name": "BaseBdev3",
00:14:50.609 "aliases": [
00:14:50.609 "e37b4268-0e95-4bcd-8f5f-a3f8d391e43b"
00:14:50.609 ],
00:14:50.609 "product_name": "Malloc disk",
00:14:50.609 "block_size": 512,
00:14:50.609 "num_blocks": 65536,
00:14:50.609 "uuid": "e37b4268-0e95-4bcd-8f5f-a3f8d391e43b",
00:14:50.609 "assigned_rate_limits": {
00:14:50.609 "rw_ios_per_sec": 0,
00:14:50.609 "rw_mbytes_per_sec": 0,
00:14:50.609 "r_mbytes_per_sec": 0,
00:14:50.609 "w_mbytes_per_sec": 0
00:14:50.609 },
00:14:50.609 "claimed": false,
00:14:50.609 "zoned": false,
00:14:50.609 "supported_io_types": {
00:14:50.609 "read": true,
00:14:50.609 "write": true,
00:14:50.609 "unmap": true,
00:14:50.609 "flush": true,
00:14:50.609 "reset": true,
00:14:50.609 "nvme_admin": false,
00:14:50.609 "nvme_io": false,
00:14:50.609 "nvme_io_md": false,
00:14:50.609 "write_zeroes": true,
00:14:50.609 "zcopy": true,
00:14:50.609 "get_zone_info": false,
00:14:50.609 "zone_management": false,
00:14:50.609 "zone_append": false,
00:14:50.609 "compare": false,
00:14:50.609 "compare_and_write": false,
00:14:50.609 "abort": true,
00:14:50.609 "seek_hole": false,
00:14:50.609 "seek_data": false,
00:14:50.609 "copy": true,
00:14:50.609 "nvme_iov_md": false
00:14:50.609 },
00:14:50.609 "memory_domains": [
00:14:50.609 {
00:14:50.609 "dma_device_id": "system",
00:14:50.609 "dma_device_type": 1
00:14:50.609 },
00:14:50.609 {
00:14:50.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:50.609 "dma_device_type": 2
00:14:50.609 }
00:14:50.609 ],
00:14:50.609 "driver_specific": {}
00:14:50.609 }
00:14:50.609 ]
00:14:50.609 17:31:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:50.609 17:31:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:14:50.609 17:31:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:14:50.609 17:31:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:14:50.609 17:31:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:14:50.609 17:31:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:50.609 17:31:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:50.609 [2024-12-07 17:31:23.954804] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:14:50.609 [2024-12-07 17:31:23.954870] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:14:50.609 [2024-12-07 17:31:23.954894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:14:50.609 [2024-12-07 17:31:23.956904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:14:50.609 17:31:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:50.609 17:31:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:14:50.609 17:31:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:50.609 17:31:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:50.609 17:31:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:50.609 17:31:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:50.609 17:31:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:50.609 17:31:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:50.609 17:31:23
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.609 17:31:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.609 17:31:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.609 17:31:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.609 17:31:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.609 17:31:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.609 17:31:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.609 17:31:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.870 17:31:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.870 "name": "Existed_Raid", 00:14:50.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.870 "strip_size_kb": 64, 00:14:50.870 "state": "configuring", 00:14:50.870 "raid_level": "raid5f", 00:14:50.870 "superblock": false, 00:14:50.870 "num_base_bdevs": 3, 00:14:50.870 "num_base_bdevs_discovered": 2, 00:14:50.870 "num_base_bdevs_operational": 3, 00:14:50.870 "base_bdevs_list": [ 00:14:50.870 { 00:14:50.870 "name": "BaseBdev1", 00:14:50.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.870 "is_configured": false, 00:14:50.870 "data_offset": 0, 00:14:50.870 "data_size": 0 00:14:50.870 }, 00:14:50.870 { 00:14:50.870 "name": "BaseBdev2", 00:14:50.870 "uuid": "531915aa-b1c1-41ef-aed3-f0cebcf23ab1", 00:14:50.870 "is_configured": true, 00:14:50.870 "data_offset": 0, 00:14:50.870 "data_size": 65536 00:14:50.870 }, 00:14:50.870 { 00:14:50.870 "name": "BaseBdev3", 00:14:50.870 "uuid": "e37b4268-0e95-4bcd-8f5f-a3f8d391e43b", 00:14:50.870 "is_configured": true, 
00:14:50.870 "data_offset": 0, 00:14:50.870 "data_size": 65536 00:14:50.870 } 00:14:50.870 ] 00:14:50.870 }' 00:14:50.870 17:31:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.870 17:31:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.130 17:31:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:51.130 17:31:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.130 17:31:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.130 [2024-12-07 17:31:24.322132] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:51.130 17:31:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.130 17:31:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:51.130 17:31:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:51.130 17:31:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:51.130 17:31:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:51.130 17:31:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:51.130 17:31:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:51.130 17:31:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.130 17:31:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.130 17:31:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.130 17:31:24 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.130 17:31:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:51.131 17:31:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.131 17:31:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.131 17:31:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.131 17:31:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.131 17:31:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.131 "name": "Existed_Raid", 00:14:51.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.131 "strip_size_kb": 64, 00:14:51.131 "state": "configuring", 00:14:51.131 "raid_level": "raid5f", 00:14:51.131 "superblock": false, 00:14:51.131 "num_base_bdevs": 3, 00:14:51.131 "num_base_bdevs_discovered": 1, 00:14:51.131 "num_base_bdevs_operational": 3, 00:14:51.131 "base_bdevs_list": [ 00:14:51.131 { 00:14:51.131 "name": "BaseBdev1", 00:14:51.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.131 "is_configured": false, 00:14:51.131 "data_offset": 0, 00:14:51.131 "data_size": 0 00:14:51.131 }, 00:14:51.131 { 00:14:51.131 "name": null, 00:14:51.131 "uuid": "531915aa-b1c1-41ef-aed3-f0cebcf23ab1", 00:14:51.131 "is_configured": false, 00:14:51.131 "data_offset": 0, 00:14:51.131 "data_size": 65536 00:14:51.131 }, 00:14:51.131 { 00:14:51.131 "name": "BaseBdev3", 00:14:51.131 "uuid": "e37b4268-0e95-4bcd-8f5f-a3f8d391e43b", 00:14:51.131 "is_configured": true, 00:14:51.131 "data_offset": 0, 00:14:51.131 "data_size": 65536 00:14:51.131 } 00:14:51.131 ] 00:14:51.131 }' 00:14:51.131 17:31:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.131 17:31:24 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.391 17:31:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.391 17:31:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:51.391 17:31:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.391 17:31:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.391 17:31:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.650 17:31:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:51.650 17:31:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:51.650 17:31:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.650 17:31:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.650 [2024-12-07 17:31:24.822768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:51.650 BaseBdev1 00:14:51.650 17:31:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.650 17:31:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:51.650 17:31:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:51.650 17:31:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:51.650 17:31:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:51.650 17:31:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:51.650 17:31:24 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:51.650 17:31:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:51.650 17:31:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.650 17:31:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.650 17:31:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.651 17:31:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:51.651 17:31:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.651 17:31:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.651 [ 00:14:51.651 { 00:14:51.651 "name": "BaseBdev1", 00:14:51.651 "aliases": [ 00:14:51.651 "017a698b-f5fa-4033-9bd4-43e74a410c0f" 00:14:51.651 ], 00:14:51.651 "product_name": "Malloc disk", 00:14:51.651 "block_size": 512, 00:14:51.651 "num_blocks": 65536, 00:14:51.651 "uuid": "017a698b-f5fa-4033-9bd4-43e74a410c0f", 00:14:51.651 "assigned_rate_limits": { 00:14:51.651 "rw_ios_per_sec": 0, 00:14:51.651 "rw_mbytes_per_sec": 0, 00:14:51.651 "r_mbytes_per_sec": 0, 00:14:51.651 "w_mbytes_per_sec": 0 00:14:51.651 }, 00:14:51.651 "claimed": true, 00:14:51.651 "claim_type": "exclusive_write", 00:14:51.651 "zoned": false, 00:14:51.651 "supported_io_types": { 00:14:51.651 "read": true, 00:14:51.651 "write": true, 00:14:51.651 "unmap": true, 00:14:51.651 "flush": true, 00:14:51.651 "reset": true, 00:14:51.651 "nvme_admin": false, 00:14:51.651 "nvme_io": false, 00:14:51.651 "nvme_io_md": false, 00:14:51.651 "write_zeroes": true, 00:14:51.651 "zcopy": true, 00:14:51.651 "get_zone_info": false, 00:14:51.651 "zone_management": false, 00:14:51.651 "zone_append": false, 00:14:51.651 
"compare": false, 00:14:51.651 "compare_and_write": false, 00:14:51.651 "abort": true, 00:14:51.651 "seek_hole": false, 00:14:51.651 "seek_data": false, 00:14:51.651 "copy": true, 00:14:51.651 "nvme_iov_md": false 00:14:51.651 }, 00:14:51.651 "memory_domains": [ 00:14:51.651 { 00:14:51.651 "dma_device_id": "system", 00:14:51.651 "dma_device_type": 1 00:14:51.651 }, 00:14:51.651 { 00:14:51.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:51.651 "dma_device_type": 2 00:14:51.651 } 00:14:51.651 ], 00:14:51.651 "driver_specific": {} 00:14:51.651 } 00:14:51.651 ] 00:14:51.651 17:31:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.651 17:31:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:51.651 17:31:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:51.651 17:31:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:51.651 17:31:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:51.651 17:31:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:51.651 17:31:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:51.651 17:31:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:51.651 17:31:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.651 17:31:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.651 17:31:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.651 17:31:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.651 17:31:24 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.651 17:31:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:51.651 17:31:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.651 17:31:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.651 17:31:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.651 17:31:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.651 "name": "Existed_Raid", 00:14:51.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.651 "strip_size_kb": 64, 00:14:51.651 "state": "configuring", 00:14:51.651 "raid_level": "raid5f", 00:14:51.651 "superblock": false, 00:14:51.651 "num_base_bdevs": 3, 00:14:51.651 "num_base_bdevs_discovered": 2, 00:14:51.651 "num_base_bdevs_operational": 3, 00:14:51.651 "base_bdevs_list": [ 00:14:51.651 { 00:14:51.651 "name": "BaseBdev1", 00:14:51.651 "uuid": "017a698b-f5fa-4033-9bd4-43e74a410c0f", 00:14:51.651 "is_configured": true, 00:14:51.651 "data_offset": 0, 00:14:51.651 "data_size": 65536 00:14:51.651 }, 00:14:51.651 { 00:14:51.651 "name": null, 00:14:51.651 "uuid": "531915aa-b1c1-41ef-aed3-f0cebcf23ab1", 00:14:51.651 "is_configured": false, 00:14:51.651 "data_offset": 0, 00:14:51.651 "data_size": 65536 00:14:51.651 }, 00:14:51.651 { 00:14:51.651 "name": "BaseBdev3", 00:14:51.651 "uuid": "e37b4268-0e95-4bcd-8f5f-a3f8d391e43b", 00:14:51.651 "is_configured": true, 00:14:51.651 "data_offset": 0, 00:14:51.651 "data_size": 65536 00:14:51.651 } 00:14:51.651 ] 00:14:51.651 }' 00:14:51.651 17:31:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.651 17:31:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.218 17:31:25 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.218 17:31:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:52.218 17:31:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.218 17:31:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.218 17:31:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.218 17:31:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:52.218 17:31:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:52.218 17:31:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.218 17:31:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.218 [2024-12-07 17:31:25.337987] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:52.218 17:31:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.218 17:31:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:52.218 17:31:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:52.218 17:31:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:52.218 17:31:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:52.218 17:31:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:52.218 17:31:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:52.218 17:31:25 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.218 17:31:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.218 17:31:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.218 17:31:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.218 17:31:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.218 17:31:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.218 17:31:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.218 17:31:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.218 17:31:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.218 17:31:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.218 "name": "Existed_Raid", 00:14:52.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.218 "strip_size_kb": 64, 00:14:52.218 "state": "configuring", 00:14:52.218 "raid_level": "raid5f", 00:14:52.218 "superblock": false, 00:14:52.218 "num_base_bdevs": 3, 00:14:52.218 "num_base_bdevs_discovered": 1, 00:14:52.218 "num_base_bdevs_operational": 3, 00:14:52.218 "base_bdevs_list": [ 00:14:52.218 { 00:14:52.218 "name": "BaseBdev1", 00:14:52.218 "uuid": "017a698b-f5fa-4033-9bd4-43e74a410c0f", 00:14:52.218 "is_configured": true, 00:14:52.218 "data_offset": 0, 00:14:52.218 "data_size": 65536 00:14:52.218 }, 00:14:52.218 { 00:14:52.218 "name": null, 00:14:52.218 "uuid": "531915aa-b1c1-41ef-aed3-f0cebcf23ab1", 00:14:52.218 "is_configured": false, 00:14:52.218 "data_offset": 0, 00:14:52.218 "data_size": 65536 00:14:52.218 }, 00:14:52.218 { 00:14:52.218 "name": null, 
00:14:52.218 "uuid": "e37b4268-0e95-4bcd-8f5f-a3f8d391e43b", 00:14:52.218 "is_configured": false, 00:14:52.218 "data_offset": 0, 00:14:52.218 "data_size": 65536 00:14:52.218 } 00:14:52.218 ] 00:14:52.218 }' 00:14:52.218 17:31:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.218 17:31:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.478 17:31:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:52.478 17:31:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.478 17:31:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.478 17:31:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.478 17:31:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.478 17:31:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:52.478 17:31:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:52.478 17:31:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.478 17:31:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.478 [2024-12-07 17:31:25.757266] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:52.478 17:31:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.478 17:31:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:52.478 17:31:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:52.478 17:31:25 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:52.478 17:31:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:52.478 17:31:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:52.478 17:31:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:52.478 17:31:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.478 17:31:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.478 17:31:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.478 17:31:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.478 17:31:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.478 17:31:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.478 17:31:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.478 17:31:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.478 17:31:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.478 17:31:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.478 "name": "Existed_Raid", 00:14:52.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.478 "strip_size_kb": 64, 00:14:52.478 "state": "configuring", 00:14:52.478 "raid_level": "raid5f", 00:14:52.478 "superblock": false, 00:14:52.478 "num_base_bdevs": 3, 00:14:52.478 "num_base_bdevs_discovered": 2, 00:14:52.478 "num_base_bdevs_operational": 3, 00:14:52.478 "base_bdevs_list": [ 00:14:52.478 { 
00:14:52.478 "name": "BaseBdev1", 00:14:52.478 "uuid": "017a698b-f5fa-4033-9bd4-43e74a410c0f", 00:14:52.478 "is_configured": true, 00:14:52.478 "data_offset": 0, 00:14:52.478 "data_size": 65536 00:14:52.478 }, 00:14:52.478 { 00:14:52.478 "name": null, 00:14:52.478 "uuid": "531915aa-b1c1-41ef-aed3-f0cebcf23ab1", 00:14:52.478 "is_configured": false, 00:14:52.478 "data_offset": 0, 00:14:52.478 "data_size": 65536 00:14:52.478 }, 00:14:52.478 { 00:14:52.478 "name": "BaseBdev3", 00:14:52.478 "uuid": "e37b4268-0e95-4bcd-8f5f-a3f8d391e43b", 00:14:52.478 "is_configured": true, 00:14:52.478 "data_offset": 0, 00:14:52.478 "data_size": 65536 00:14:52.478 } 00:14:52.478 ] 00:14:52.478 }' 00:14:52.478 17:31:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.478 17:31:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.046 17:31:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:53.046 17:31:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.046 17:31:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.046 17:31:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.046 17:31:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.046 17:31:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:53.046 17:31:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:53.046 17:31:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.046 17:31:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.046 [2024-12-07 17:31:26.200574] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:53.046 17:31:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.046 17:31:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:53.046 17:31:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:53.046 17:31:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:53.046 17:31:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:53.046 17:31:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:53.046 17:31:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:53.046 17:31:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.046 17:31:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.046 17:31:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.046 17:31:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.046 17:31:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.046 17:31:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.046 17:31:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.046 17:31:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:53.046 17:31:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.046 17:31:26 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.046 "name": "Existed_Raid", 00:14:53.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.046 "strip_size_kb": 64, 00:14:53.046 "state": "configuring", 00:14:53.046 "raid_level": "raid5f", 00:14:53.046 "superblock": false, 00:14:53.046 "num_base_bdevs": 3, 00:14:53.046 "num_base_bdevs_discovered": 1, 00:14:53.046 "num_base_bdevs_operational": 3, 00:14:53.046 "base_bdevs_list": [ 00:14:53.046 { 00:14:53.046 "name": null, 00:14:53.046 "uuid": "017a698b-f5fa-4033-9bd4-43e74a410c0f", 00:14:53.046 "is_configured": false, 00:14:53.046 "data_offset": 0, 00:14:53.046 "data_size": 65536 00:14:53.046 }, 00:14:53.046 { 00:14:53.046 "name": null, 00:14:53.046 "uuid": "531915aa-b1c1-41ef-aed3-f0cebcf23ab1", 00:14:53.046 "is_configured": false, 00:14:53.046 "data_offset": 0, 00:14:53.046 "data_size": 65536 00:14:53.046 }, 00:14:53.046 { 00:14:53.046 "name": "BaseBdev3", 00:14:53.046 "uuid": "e37b4268-0e95-4bcd-8f5f-a3f8d391e43b", 00:14:53.046 "is_configured": true, 00:14:53.046 "data_offset": 0, 00:14:53.046 "data_size": 65536 00:14:53.046 } 00:14:53.046 ] 00:14:53.046 }' 00:14:53.046 17:31:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.046 17:31:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.354 17:31:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.354 17:31:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:53.354 17:31:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.354 17:31:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.354 17:31:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.354 17:31:26 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:53.354 17:31:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:53.354 17:31:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.354 17:31:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.612 [2024-12-07 17:31:26.737064] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:53.612 17:31:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.612 17:31:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:53.612 17:31:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:53.612 17:31:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:53.612 17:31:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:53.612 17:31:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:53.612 17:31:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:53.612 17:31:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.612 17:31:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.612 17:31:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.612 17:31:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.612 17:31:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.612 17:31:26 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:53.612 17:31:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.612 17:31:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.612 17:31:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.612 17:31:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.612 "name": "Existed_Raid", 00:14:53.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.612 "strip_size_kb": 64, 00:14:53.612 "state": "configuring", 00:14:53.612 "raid_level": "raid5f", 00:14:53.612 "superblock": false, 00:14:53.612 "num_base_bdevs": 3, 00:14:53.612 "num_base_bdevs_discovered": 2, 00:14:53.612 "num_base_bdevs_operational": 3, 00:14:53.612 "base_bdevs_list": [ 00:14:53.612 { 00:14:53.612 "name": null, 00:14:53.612 "uuid": "017a698b-f5fa-4033-9bd4-43e74a410c0f", 00:14:53.612 "is_configured": false, 00:14:53.612 "data_offset": 0, 00:14:53.612 "data_size": 65536 00:14:53.612 }, 00:14:53.612 { 00:14:53.612 "name": "BaseBdev2", 00:14:53.612 "uuid": "531915aa-b1c1-41ef-aed3-f0cebcf23ab1", 00:14:53.612 "is_configured": true, 00:14:53.612 "data_offset": 0, 00:14:53.612 "data_size": 65536 00:14:53.612 }, 00:14:53.612 { 00:14:53.612 "name": "BaseBdev3", 00:14:53.612 "uuid": "e37b4268-0e95-4bcd-8f5f-a3f8d391e43b", 00:14:53.612 "is_configured": true, 00:14:53.612 "data_offset": 0, 00:14:53.612 "data_size": 65536 00:14:53.612 } 00:14:53.612 ] 00:14:53.612 }' 00:14:53.612 17:31:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.612 17:31:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.870 17:31:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.870 17:31:27 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:53.870 17:31:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.870 17:31:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.870 17:31:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.870 17:31:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:53.870 17:31:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:53.870 17:31:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.870 17:31:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.870 17:31:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.130 17:31:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.130 17:31:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 017a698b-f5fa-4033-9bd4-43e74a410c0f 00:14:54.130 17:31:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.130 17:31:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.130 [2024-12-07 17:31:27.317188] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:54.130 [2024-12-07 17:31:27.317243] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:54.130 [2024-12-07 17:31:27.317254] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:54.130 [2024-12-07 17:31:27.317524] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:14:54.130 [2024-12-07 17:31:27.322549] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:54.130 [2024-12-07 17:31:27.322577] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:54.130 [2024-12-07 17:31:27.322859] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:54.130 NewBaseBdev 00:14:54.130 17:31:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.130 17:31:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:54.130 17:31:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:54.130 17:31:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:54.130 17:31:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:54.130 17:31:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:54.130 17:31:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:54.130 17:31:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:54.130 17:31:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.130 17:31:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.130 17:31:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.130 17:31:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:54.130 17:31:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.130 17:31:27 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.130 [ 00:14:54.130 { 00:14:54.130 "name": "NewBaseBdev", 00:14:54.130 "aliases": [ 00:14:54.130 "017a698b-f5fa-4033-9bd4-43e74a410c0f" 00:14:54.130 ], 00:14:54.130 "product_name": "Malloc disk", 00:14:54.130 "block_size": 512, 00:14:54.130 "num_blocks": 65536, 00:14:54.130 "uuid": "017a698b-f5fa-4033-9bd4-43e74a410c0f", 00:14:54.130 "assigned_rate_limits": { 00:14:54.130 "rw_ios_per_sec": 0, 00:14:54.130 "rw_mbytes_per_sec": 0, 00:14:54.130 "r_mbytes_per_sec": 0, 00:14:54.130 "w_mbytes_per_sec": 0 00:14:54.130 }, 00:14:54.130 "claimed": true, 00:14:54.130 "claim_type": "exclusive_write", 00:14:54.130 "zoned": false, 00:14:54.130 "supported_io_types": { 00:14:54.130 "read": true, 00:14:54.130 "write": true, 00:14:54.130 "unmap": true, 00:14:54.130 "flush": true, 00:14:54.130 "reset": true, 00:14:54.130 "nvme_admin": false, 00:14:54.130 "nvme_io": false, 00:14:54.130 "nvme_io_md": false, 00:14:54.130 "write_zeroes": true, 00:14:54.130 "zcopy": true, 00:14:54.130 "get_zone_info": false, 00:14:54.130 "zone_management": false, 00:14:54.130 "zone_append": false, 00:14:54.130 "compare": false, 00:14:54.130 "compare_and_write": false, 00:14:54.130 "abort": true, 00:14:54.130 "seek_hole": false, 00:14:54.130 "seek_data": false, 00:14:54.130 "copy": true, 00:14:54.130 "nvme_iov_md": false 00:14:54.130 }, 00:14:54.130 "memory_domains": [ 00:14:54.130 { 00:14:54.130 "dma_device_id": "system", 00:14:54.130 "dma_device_type": 1 00:14:54.130 }, 00:14:54.130 { 00:14:54.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:54.130 "dma_device_type": 2 00:14:54.130 } 00:14:54.130 ], 00:14:54.130 "driver_specific": {} 00:14:54.130 } 00:14:54.130 ] 00:14:54.130 17:31:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.130 17:31:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:54.130 17:31:27 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:54.130 17:31:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:54.130 17:31:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:54.130 17:31:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:54.130 17:31:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:54.130 17:31:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:54.130 17:31:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.130 17:31:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.130 17:31:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.130 17:31:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.130 17:31:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.130 17:31:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.130 17:31:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.130 17:31:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.130 17:31:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.130 17:31:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.130 "name": "Existed_Raid", 00:14:54.130 "uuid": "d7ac549d-fd36-4e66-983a-2f5f9b64b8e7", 00:14:54.130 "strip_size_kb": 64, 00:14:54.130 "state": "online", 
00:14:54.130 "raid_level": "raid5f", 00:14:54.130 "superblock": false, 00:14:54.130 "num_base_bdevs": 3, 00:14:54.130 "num_base_bdevs_discovered": 3, 00:14:54.130 "num_base_bdevs_operational": 3, 00:14:54.130 "base_bdevs_list": [ 00:14:54.130 { 00:14:54.130 "name": "NewBaseBdev", 00:14:54.130 "uuid": "017a698b-f5fa-4033-9bd4-43e74a410c0f", 00:14:54.130 "is_configured": true, 00:14:54.130 "data_offset": 0, 00:14:54.130 "data_size": 65536 00:14:54.130 }, 00:14:54.130 { 00:14:54.130 "name": "BaseBdev2", 00:14:54.130 "uuid": "531915aa-b1c1-41ef-aed3-f0cebcf23ab1", 00:14:54.130 "is_configured": true, 00:14:54.130 "data_offset": 0, 00:14:54.130 "data_size": 65536 00:14:54.130 }, 00:14:54.130 { 00:14:54.130 "name": "BaseBdev3", 00:14:54.130 "uuid": "e37b4268-0e95-4bcd-8f5f-a3f8d391e43b", 00:14:54.130 "is_configured": true, 00:14:54.130 "data_offset": 0, 00:14:54.130 "data_size": 65536 00:14:54.130 } 00:14:54.130 ] 00:14:54.130 }' 00:14:54.130 17:31:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.130 17:31:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.390 17:31:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:54.390 17:31:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:54.390 17:31:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:54.390 17:31:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:54.390 17:31:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:54.390 17:31:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:54.390 17:31:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:54.390 17:31:27 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:54.390 17:31:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.390 17:31:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.390 [2024-12-07 17:31:27.761114] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:54.650 17:31:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.650 17:31:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:54.650 "name": "Existed_Raid", 00:14:54.650 "aliases": [ 00:14:54.650 "d7ac549d-fd36-4e66-983a-2f5f9b64b8e7" 00:14:54.650 ], 00:14:54.650 "product_name": "Raid Volume", 00:14:54.650 "block_size": 512, 00:14:54.650 "num_blocks": 131072, 00:14:54.650 "uuid": "d7ac549d-fd36-4e66-983a-2f5f9b64b8e7", 00:14:54.650 "assigned_rate_limits": { 00:14:54.650 "rw_ios_per_sec": 0, 00:14:54.650 "rw_mbytes_per_sec": 0, 00:14:54.650 "r_mbytes_per_sec": 0, 00:14:54.650 "w_mbytes_per_sec": 0 00:14:54.650 }, 00:14:54.650 "claimed": false, 00:14:54.650 "zoned": false, 00:14:54.650 "supported_io_types": { 00:14:54.650 "read": true, 00:14:54.650 "write": true, 00:14:54.650 "unmap": false, 00:14:54.650 "flush": false, 00:14:54.650 "reset": true, 00:14:54.650 "nvme_admin": false, 00:14:54.650 "nvme_io": false, 00:14:54.650 "nvme_io_md": false, 00:14:54.650 "write_zeroes": true, 00:14:54.650 "zcopy": false, 00:14:54.650 "get_zone_info": false, 00:14:54.650 "zone_management": false, 00:14:54.650 "zone_append": false, 00:14:54.650 "compare": false, 00:14:54.650 "compare_and_write": false, 00:14:54.650 "abort": false, 00:14:54.650 "seek_hole": false, 00:14:54.650 "seek_data": false, 00:14:54.650 "copy": false, 00:14:54.650 "nvme_iov_md": false 00:14:54.650 }, 00:14:54.650 "driver_specific": { 00:14:54.650 "raid": { 00:14:54.650 "uuid": 
"d7ac549d-fd36-4e66-983a-2f5f9b64b8e7", 00:14:54.650 "strip_size_kb": 64, 00:14:54.650 "state": "online", 00:14:54.650 "raid_level": "raid5f", 00:14:54.650 "superblock": false, 00:14:54.650 "num_base_bdevs": 3, 00:14:54.650 "num_base_bdevs_discovered": 3, 00:14:54.650 "num_base_bdevs_operational": 3, 00:14:54.650 "base_bdevs_list": [ 00:14:54.650 { 00:14:54.650 "name": "NewBaseBdev", 00:14:54.650 "uuid": "017a698b-f5fa-4033-9bd4-43e74a410c0f", 00:14:54.650 "is_configured": true, 00:14:54.650 "data_offset": 0, 00:14:54.650 "data_size": 65536 00:14:54.650 }, 00:14:54.650 { 00:14:54.650 "name": "BaseBdev2", 00:14:54.650 "uuid": "531915aa-b1c1-41ef-aed3-f0cebcf23ab1", 00:14:54.650 "is_configured": true, 00:14:54.650 "data_offset": 0, 00:14:54.650 "data_size": 65536 00:14:54.650 }, 00:14:54.650 { 00:14:54.650 "name": "BaseBdev3", 00:14:54.650 "uuid": "e37b4268-0e95-4bcd-8f5f-a3f8d391e43b", 00:14:54.650 "is_configured": true, 00:14:54.650 "data_offset": 0, 00:14:54.650 "data_size": 65536 00:14:54.650 } 00:14:54.650 ] 00:14:54.650 } 00:14:54.650 } 00:14:54.650 }' 00:14:54.650 17:31:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:54.650 17:31:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:54.650 BaseBdev2 00:14:54.650 BaseBdev3' 00:14:54.650 17:31:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:54.650 17:31:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:54.650 17:31:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:54.650 17:31:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:54.650 17:31:27 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:54.650 17:31:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.650 17:31:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.650 17:31:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.650 17:31:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:54.650 17:31:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:54.650 17:31:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:54.650 17:31:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:54.650 17:31:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.650 17:31:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.650 17:31:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:54.650 17:31:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.650 17:31:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:54.650 17:31:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:54.650 17:31:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:54.650 17:31:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:54.650 17:31:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.650 17:31:27 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.650 17:31:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:54.650 17:31:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.650 17:31:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:54.650 17:31:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:54.650 17:31:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:54.650 17:31:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.650 17:31:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.909 [2024-12-07 17:31:28.032458] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:54.909 [2024-12-07 17:31:28.032551] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:54.909 [2024-12-07 17:31:28.032647] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:54.909 [2024-12-07 17:31:28.032979] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:54.909 [2024-12-07 17:31:28.033047] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:54.909 17:31:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.909 17:31:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 79879 00:14:54.909 17:31:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 79879 ']' 00:14:54.909 17:31:28 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 79879 00:14:54.909 17:31:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:14:54.909 17:31:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:54.909 17:31:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79879 00:14:54.909 17:31:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:54.909 killing process with pid 79879 00:14:54.909 17:31:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:54.909 17:31:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79879' 00:14:54.909 17:31:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 79879 00:14:54.909 [2024-12-07 17:31:28.082282] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:54.909 17:31:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 79879 00:14:55.168 [2024-12-07 17:31:28.391335] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:56.548 17:31:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:56.548 00:14:56.548 real 0m10.356s 00:14:56.548 user 0m16.055s 00:14:56.548 sys 0m2.084s 00:14:56.548 17:31:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:56.548 17:31:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.548 ************************************ 00:14:56.548 END TEST raid5f_state_function_test 00:14:56.548 ************************************ 00:14:56.548 17:31:29 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:14:56.548 17:31:29 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:56.548 17:31:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:56.548 17:31:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:56.548 ************************************ 00:14:56.548 START TEST raid5f_state_function_test_sb 00:14:56.548 ************************************ 00:14:56.548 17:31:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:14:56.548 17:31:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:56.548 17:31:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:56.548 17:31:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:56.548 17:31:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:56.548 17:31:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:56.548 17:31:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:56.548 17:31:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:56.548 17:31:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:56.548 17:31:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:56.548 17:31:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:56.548 17:31:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:56.548 17:31:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:56.548 17:31:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:56.548 17:31:29 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:56.548 17:31:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:56.548 17:31:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:56.548 17:31:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:56.548 17:31:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:56.548 17:31:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:56.548 17:31:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:56.548 17:31:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:56.548 17:31:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:56.548 17:31:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:56.548 17:31:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:56.548 17:31:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:56.548 17:31:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:56.548 Process raid pid: 80500 00:14:56.548 17:31:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80500 00:14:56.548 17:31:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:56.548 17:31:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80500' 00:14:56.548 17:31:29 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 80500 00:14:56.548 17:31:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80500 ']' 00:14:56.548 17:31:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.548 17:31:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:56.548 17:31:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:56.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:56.548 17:31:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:56.548 17:31:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.548 [2024-12-07 17:31:29.745250] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:14:56.548 [2024-12-07 17:31:29.745475] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:56.548 [2024-12-07 17:31:29.922502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.808 [2024-12-07 17:31:30.054674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.067 [2024-12-07 17:31:30.281730] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:57.067 [2024-12-07 17:31:30.281787] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:57.327 17:31:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:57.327 17:31:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:57.327 17:31:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:57.327 17:31:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.327 17:31:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.327 [2024-12-07 17:31:30.576683] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:57.327 [2024-12-07 17:31:30.576755] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:57.327 [2024-12-07 17:31:30.576767] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:57.327 [2024-12-07 17:31:30.576779] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:57.327 [2024-12-07 17:31:30.576792] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:14:57.327 [2024-12-07 17:31:30.576805] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:57.327 17:31:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.327 17:31:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:57.327 17:31:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:57.327 17:31:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:57.327 17:31:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:57.327 17:31:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:57.327 17:31:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:57.327 17:31:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.327 17:31:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.327 17:31:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.327 17:31:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.327 17:31:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.327 17:31:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.327 17:31:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.327 17:31:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.327 17:31:30 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.327 17:31:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.327 "name": "Existed_Raid", 00:14:57.327 "uuid": "b032d740-46ef-49c1-a20d-da3e6529128c", 00:14:57.327 "strip_size_kb": 64, 00:14:57.327 "state": "configuring", 00:14:57.327 "raid_level": "raid5f", 00:14:57.327 "superblock": true, 00:14:57.327 "num_base_bdevs": 3, 00:14:57.327 "num_base_bdevs_discovered": 0, 00:14:57.327 "num_base_bdevs_operational": 3, 00:14:57.327 "base_bdevs_list": [ 00:14:57.327 { 00:14:57.327 "name": "BaseBdev1", 00:14:57.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.327 "is_configured": false, 00:14:57.327 "data_offset": 0, 00:14:57.327 "data_size": 0 00:14:57.327 }, 00:14:57.327 { 00:14:57.327 "name": "BaseBdev2", 00:14:57.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.327 "is_configured": false, 00:14:57.327 "data_offset": 0, 00:14:57.327 "data_size": 0 00:14:57.327 }, 00:14:57.327 { 00:14:57.327 "name": "BaseBdev3", 00:14:57.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.327 "is_configured": false, 00:14:57.327 "data_offset": 0, 00:14:57.327 "data_size": 0 00:14:57.327 } 00:14:57.327 ] 00:14:57.327 }' 00:14:57.327 17:31:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.327 17:31:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.897 17:31:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:57.897 17:31:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.897 17:31:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.897 [2024-12-07 17:31:31.039802] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:57.897 
[2024-12-07 17:31:31.039922] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:57.897 17:31:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.897 17:31:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:57.897 17:31:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.897 17:31:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.897 [2024-12-07 17:31:31.047801] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:57.897 [2024-12-07 17:31:31.047896] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:57.897 [2024-12-07 17:31:31.047945] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:57.897 [2024-12-07 17:31:31.047990] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:57.897 [2024-12-07 17:31:31.048022] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:57.897 [2024-12-07 17:31:31.048050] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:57.897 17:31:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.897 17:31:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:57.897 17:31:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.897 17:31:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.897 [2024-12-07 17:31:31.096341] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:57.897 BaseBdev1 00:14:57.897 17:31:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.897 17:31:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:57.897 17:31:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:57.897 17:31:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:57.897 17:31:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:57.897 17:31:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:57.897 17:31:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:57.897 17:31:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:57.897 17:31:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.897 17:31:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.897 17:31:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.897 17:31:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:57.897 17:31:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.898 17:31:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.898 [ 00:14:57.898 { 00:14:57.898 "name": "BaseBdev1", 00:14:57.898 "aliases": [ 00:14:57.898 "d3a09c5a-5ce8-4164-b633-865bac3a1e7a" 00:14:57.898 ], 00:14:57.898 "product_name": "Malloc disk", 00:14:57.898 "block_size": 512, 00:14:57.898 
"num_blocks": 65536, 00:14:57.898 "uuid": "d3a09c5a-5ce8-4164-b633-865bac3a1e7a", 00:14:57.898 "assigned_rate_limits": { 00:14:57.898 "rw_ios_per_sec": 0, 00:14:57.898 "rw_mbytes_per_sec": 0, 00:14:57.898 "r_mbytes_per_sec": 0, 00:14:57.898 "w_mbytes_per_sec": 0 00:14:57.898 }, 00:14:57.898 "claimed": true, 00:14:57.898 "claim_type": "exclusive_write", 00:14:57.898 "zoned": false, 00:14:57.898 "supported_io_types": { 00:14:57.898 "read": true, 00:14:57.898 "write": true, 00:14:57.898 "unmap": true, 00:14:57.898 "flush": true, 00:14:57.898 "reset": true, 00:14:57.898 "nvme_admin": false, 00:14:57.898 "nvme_io": false, 00:14:57.898 "nvme_io_md": false, 00:14:57.898 "write_zeroes": true, 00:14:57.898 "zcopy": true, 00:14:57.898 "get_zone_info": false, 00:14:57.898 "zone_management": false, 00:14:57.898 "zone_append": false, 00:14:57.898 "compare": false, 00:14:57.898 "compare_and_write": false, 00:14:57.898 "abort": true, 00:14:57.898 "seek_hole": false, 00:14:57.898 "seek_data": false, 00:14:57.898 "copy": true, 00:14:57.898 "nvme_iov_md": false 00:14:57.898 }, 00:14:57.898 "memory_domains": [ 00:14:57.898 { 00:14:57.898 "dma_device_id": "system", 00:14:57.898 "dma_device_type": 1 00:14:57.898 }, 00:14:57.898 { 00:14:57.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.898 "dma_device_type": 2 00:14:57.898 } 00:14:57.898 ], 00:14:57.898 "driver_specific": {} 00:14:57.898 } 00:14:57.898 ] 00:14:57.898 17:31:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.898 17:31:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:57.898 17:31:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:57.898 17:31:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:57.898 17:31:31 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:57.898 17:31:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:57.898 17:31:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:57.898 17:31:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:57.898 17:31:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.898 17:31:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.898 17:31:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.898 17:31:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.898 17:31:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.898 17:31:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.898 17:31:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.898 17:31:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.898 17:31:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.898 17:31:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.898 "name": "Existed_Raid", 00:14:57.898 "uuid": "1fd08d27-2fe2-4c93-bd83-1de517efa7d8", 00:14:57.898 "strip_size_kb": 64, 00:14:57.898 "state": "configuring", 00:14:57.898 "raid_level": "raid5f", 00:14:57.898 "superblock": true, 00:14:57.898 "num_base_bdevs": 3, 00:14:57.898 "num_base_bdevs_discovered": 1, 00:14:57.898 "num_base_bdevs_operational": 3, 00:14:57.898 "base_bdevs_list": [ 00:14:57.898 { 00:14:57.898 
"name": "BaseBdev1", 00:14:57.898 "uuid": "d3a09c5a-5ce8-4164-b633-865bac3a1e7a", 00:14:57.898 "is_configured": true, 00:14:57.898 "data_offset": 2048, 00:14:57.898 "data_size": 63488 00:14:57.898 }, 00:14:57.898 { 00:14:57.898 "name": "BaseBdev2", 00:14:57.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.898 "is_configured": false, 00:14:57.898 "data_offset": 0, 00:14:57.898 "data_size": 0 00:14:57.898 }, 00:14:57.898 { 00:14:57.898 "name": "BaseBdev3", 00:14:57.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.898 "is_configured": false, 00:14:57.898 "data_offset": 0, 00:14:57.898 "data_size": 0 00:14:57.898 } 00:14:57.898 ] 00:14:57.898 }' 00:14:57.898 17:31:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.898 17:31:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.468 17:31:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:58.468 17:31:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.468 17:31:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.468 [2024-12-07 17:31:31.547569] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:58.468 [2024-12-07 17:31:31.547614] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:58.468 17:31:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.468 17:31:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:58.468 17:31:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.468 17:31:31 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:14:58.468 [2024-12-07 17:31:31.559622] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:58.468 [2024-12-07 17:31:31.561658] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:58.468 [2024-12-07 17:31:31.561772] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:58.468 [2024-12-07 17:31:31.561788] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:58.468 [2024-12-07 17:31:31.561799] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:58.468 17:31:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.468 17:31:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:58.468 17:31:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:58.468 17:31:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:58.468 17:31:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:58.468 17:31:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:58.468 17:31:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:58.468 17:31:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:58.468 17:31:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:58.468 17:31:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.468 17:31:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:14:58.468 17:31:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.468 17:31:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.468 17:31:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.468 17:31:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:58.468 17:31:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.468 17:31:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.468 17:31:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.468 17:31:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.468 "name": "Existed_Raid", 00:14:58.468 "uuid": "dce16385-e9fd-4455-b1d9-6767a5122cf3", 00:14:58.468 "strip_size_kb": 64, 00:14:58.468 "state": "configuring", 00:14:58.468 "raid_level": "raid5f", 00:14:58.468 "superblock": true, 00:14:58.468 "num_base_bdevs": 3, 00:14:58.468 "num_base_bdevs_discovered": 1, 00:14:58.468 "num_base_bdevs_operational": 3, 00:14:58.468 "base_bdevs_list": [ 00:14:58.468 { 00:14:58.468 "name": "BaseBdev1", 00:14:58.468 "uuid": "d3a09c5a-5ce8-4164-b633-865bac3a1e7a", 00:14:58.468 "is_configured": true, 00:14:58.468 "data_offset": 2048, 00:14:58.468 "data_size": 63488 00:14:58.468 }, 00:14:58.468 { 00:14:58.468 "name": "BaseBdev2", 00:14:58.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.468 "is_configured": false, 00:14:58.468 "data_offset": 0, 00:14:58.468 "data_size": 0 00:14:58.468 }, 00:14:58.468 { 00:14:58.468 "name": "BaseBdev3", 00:14:58.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.468 "is_configured": false, 00:14:58.468 "data_offset": 0, 00:14:58.468 "data_size": 
0 00:14:58.468 } 00:14:58.468 ] 00:14:58.468 }' 00:14:58.468 17:31:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.468 17:31:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.729 17:31:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:58.729 17:31:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.729 17:31:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.729 [2024-12-07 17:31:32.005745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:58.729 BaseBdev2 00:14:58.729 17:31:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.729 17:31:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:58.729 17:31:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:58.729 17:31:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:58.729 17:31:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:58.729 17:31:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:58.729 17:31:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:58.729 17:31:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:58.729 17:31:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.729 17:31:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.729 17:31:32 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.729 17:31:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:58.729 17:31:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.729 17:31:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.729 [ 00:14:58.729 { 00:14:58.729 "name": "BaseBdev2", 00:14:58.729 "aliases": [ 00:14:58.729 "743bc222-1476-480f-9077-9ac7898e7bb4" 00:14:58.729 ], 00:14:58.729 "product_name": "Malloc disk", 00:14:58.729 "block_size": 512, 00:14:58.729 "num_blocks": 65536, 00:14:58.729 "uuid": "743bc222-1476-480f-9077-9ac7898e7bb4", 00:14:58.729 "assigned_rate_limits": { 00:14:58.729 "rw_ios_per_sec": 0, 00:14:58.729 "rw_mbytes_per_sec": 0, 00:14:58.729 "r_mbytes_per_sec": 0, 00:14:58.729 "w_mbytes_per_sec": 0 00:14:58.729 }, 00:14:58.729 "claimed": true, 00:14:58.729 "claim_type": "exclusive_write", 00:14:58.729 "zoned": false, 00:14:58.729 "supported_io_types": { 00:14:58.729 "read": true, 00:14:58.729 "write": true, 00:14:58.729 "unmap": true, 00:14:58.729 "flush": true, 00:14:58.729 "reset": true, 00:14:58.729 "nvme_admin": false, 00:14:58.729 "nvme_io": false, 00:14:58.729 "nvme_io_md": false, 00:14:58.729 "write_zeroes": true, 00:14:58.729 "zcopy": true, 00:14:58.729 "get_zone_info": false, 00:14:58.729 "zone_management": false, 00:14:58.729 "zone_append": false, 00:14:58.729 "compare": false, 00:14:58.729 "compare_and_write": false, 00:14:58.729 "abort": true, 00:14:58.729 "seek_hole": false, 00:14:58.729 "seek_data": false, 00:14:58.729 "copy": true, 00:14:58.729 "nvme_iov_md": false 00:14:58.729 }, 00:14:58.729 "memory_domains": [ 00:14:58.729 { 00:14:58.729 "dma_device_id": "system", 00:14:58.729 "dma_device_type": 1 00:14:58.729 }, 00:14:58.729 { 00:14:58.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:58.729 "dma_device_type": 2 00:14:58.729 } 
00:14:58.729 ], 00:14:58.729 "driver_specific": {} 00:14:58.729 } 00:14:58.729 ] 00:14:58.729 17:31:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.729 17:31:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:58.729 17:31:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:58.729 17:31:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:58.729 17:31:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:58.729 17:31:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:58.729 17:31:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:58.729 17:31:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:58.729 17:31:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:58.729 17:31:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:58.729 17:31:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.729 17:31:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.729 17:31:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.729 17:31:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.729 17:31:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:58.729 17:31:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:58.729 17:31:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.729 17:31:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.729 17:31:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.729 17:31:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.729 "name": "Existed_Raid", 00:14:58.729 "uuid": "dce16385-e9fd-4455-b1d9-6767a5122cf3", 00:14:58.729 "strip_size_kb": 64, 00:14:58.729 "state": "configuring", 00:14:58.729 "raid_level": "raid5f", 00:14:58.729 "superblock": true, 00:14:58.729 "num_base_bdevs": 3, 00:14:58.729 "num_base_bdevs_discovered": 2, 00:14:58.729 "num_base_bdevs_operational": 3, 00:14:58.729 "base_bdevs_list": [ 00:14:58.729 { 00:14:58.729 "name": "BaseBdev1", 00:14:58.729 "uuid": "d3a09c5a-5ce8-4164-b633-865bac3a1e7a", 00:14:58.729 "is_configured": true, 00:14:58.729 "data_offset": 2048, 00:14:58.729 "data_size": 63488 00:14:58.730 }, 00:14:58.730 { 00:14:58.730 "name": "BaseBdev2", 00:14:58.730 "uuid": "743bc222-1476-480f-9077-9ac7898e7bb4", 00:14:58.730 "is_configured": true, 00:14:58.730 "data_offset": 2048, 00:14:58.730 "data_size": 63488 00:14:58.730 }, 00:14:58.730 { 00:14:58.730 "name": "BaseBdev3", 00:14:58.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.730 "is_configured": false, 00:14:58.730 "data_offset": 0, 00:14:58.730 "data_size": 0 00:14:58.730 } 00:14:58.730 ] 00:14:58.730 }' 00:14:58.730 17:31:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.730 17:31:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.320 17:31:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:59.320 17:31:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:14:59.320 17:31:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.320 [2024-12-07 17:31:32.520629] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:59.320 [2024-12-07 17:31:32.521032] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:59.321 [2024-12-07 17:31:32.521060] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:59.321 [2024-12-07 17:31:32.521362] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:59.321 BaseBdev3 00:14:59.321 17:31:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.321 17:31:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:59.321 17:31:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:59.321 17:31:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:59.321 17:31:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:59.321 17:31:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:59.321 17:31:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:59.321 17:31:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:59.321 17:31:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.321 17:31:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.321 [2024-12-07 17:31:32.526899] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:59.321 [2024-12-07 17:31:32.526990] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:59.321 [2024-12-07 17:31:32.527211] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:59.321 17:31:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.321 17:31:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:59.321 17:31:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.321 17:31:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.321 [ 00:14:59.321 { 00:14:59.321 "name": "BaseBdev3", 00:14:59.321 "aliases": [ 00:14:59.321 "e447abac-2b01-4485-b177-c0ceb222beb5" 00:14:59.321 ], 00:14:59.321 "product_name": "Malloc disk", 00:14:59.321 "block_size": 512, 00:14:59.321 "num_blocks": 65536, 00:14:59.321 "uuid": "e447abac-2b01-4485-b177-c0ceb222beb5", 00:14:59.321 "assigned_rate_limits": { 00:14:59.321 "rw_ios_per_sec": 0, 00:14:59.321 "rw_mbytes_per_sec": 0, 00:14:59.321 "r_mbytes_per_sec": 0, 00:14:59.321 "w_mbytes_per_sec": 0 00:14:59.321 }, 00:14:59.321 "claimed": true, 00:14:59.321 "claim_type": "exclusive_write", 00:14:59.321 "zoned": false, 00:14:59.321 "supported_io_types": { 00:14:59.321 "read": true, 00:14:59.321 "write": true, 00:14:59.321 "unmap": true, 00:14:59.321 "flush": true, 00:14:59.321 "reset": true, 00:14:59.321 "nvme_admin": false, 00:14:59.321 "nvme_io": false, 00:14:59.321 "nvme_io_md": false, 00:14:59.321 "write_zeroes": true, 00:14:59.321 "zcopy": true, 00:14:59.321 "get_zone_info": false, 00:14:59.321 "zone_management": false, 00:14:59.321 "zone_append": false, 00:14:59.321 "compare": false, 00:14:59.321 "compare_and_write": false, 00:14:59.321 "abort": true, 00:14:59.321 "seek_hole": false, 00:14:59.321 "seek_data": false, 00:14:59.321 "copy": true, 00:14:59.321 
"nvme_iov_md": false 00:14:59.321 }, 00:14:59.321 "memory_domains": [ 00:14:59.321 { 00:14:59.321 "dma_device_id": "system", 00:14:59.321 "dma_device_type": 1 00:14:59.321 }, 00:14:59.321 { 00:14:59.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:59.321 "dma_device_type": 2 00:14:59.321 } 00:14:59.321 ], 00:14:59.321 "driver_specific": {} 00:14:59.321 } 00:14:59.321 ] 00:14:59.321 17:31:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.321 17:31:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:59.321 17:31:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:59.321 17:31:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:59.321 17:31:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:59.321 17:31:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:59.321 17:31:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:59.321 17:31:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:59.321 17:31:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:59.321 17:31:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:59.321 17:31:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.321 17:31:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.321 17:31:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.321 17:31:32 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.321 17:31:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:59.321 17:31:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.321 17:31:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.321 17:31:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.321 17:31:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.321 17:31:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.321 "name": "Existed_Raid", 00:14:59.321 "uuid": "dce16385-e9fd-4455-b1d9-6767a5122cf3", 00:14:59.321 "strip_size_kb": 64, 00:14:59.321 "state": "online", 00:14:59.321 "raid_level": "raid5f", 00:14:59.321 "superblock": true, 00:14:59.321 "num_base_bdevs": 3, 00:14:59.321 "num_base_bdevs_discovered": 3, 00:14:59.321 "num_base_bdevs_operational": 3, 00:14:59.321 "base_bdevs_list": [ 00:14:59.321 { 00:14:59.321 "name": "BaseBdev1", 00:14:59.321 "uuid": "d3a09c5a-5ce8-4164-b633-865bac3a1e7a", 00:14:59.321 "is_configured": true, 00:14:59.321 "data_offset": 2048, 00:14:59.321 "data_size": 63488 00:14:59.321 }, 00:14:59.321 { 00:14:59.321 "name": "BaseBdev2", 00:14:59.321 "uuid": "743bc222-1476-480f-9077-9ac7898e7bb4", 00:14:59.321 "is_configured": true, 00:14:59.321 "data_offset": 2048, 00:14:59.321 "data_size": 63488 00:14:59.321 }, 00:14:59.321 { 00:14:59.321 "name": "BaseBdev3", 00:14:59.321 "uuid": "e447abac-2b01-4485-b177-c0ceb222beb5", 00:14:59.321 "is_configured": true, 00:14:59.321 "data_offset": 2048, 00:14:59.321 "data_size": 63488 00:14:59.321 } 00:14:59.321 ] 00:14:59.321 }' 00:14:59.321 17:31:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.321 17:31:32 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.892 17:31:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:59.892 17:31:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:59.892 17:31:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:59.892 17:31:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:59.892 17:31:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:59.892 17:31:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:59.892 17:31:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:59.892 17:31:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.892 17:31:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.892 17:31:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:59.892 [2024-12-07 17:31:33.033208] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:59.892 17:31:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.892 17:31:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:59.892 "name": "Existed_Raid", 00:14:59.892 "aliases": [ 00:14:59.892 "dce16385-e9fd-4455-b1d9-6767a5122cf3" 00:14:59.892 ], 00:14:59.892 "product_name": "Raid Volume", 00:14:59.892 "block_size": 512, 00:14:59.892 "num_blocks": 126976, 00:14:59.892 "uuid": "dce16385-e9fd-4455-b1d9-6767a5122cf3", 00:14:59.892 "assigned_rate_limits": { 00:14:59.892 "rw_ios_per_sec": 0, 00:14:59.892 
"rw_mbytes_per_sec": 0, 00:14:59.892 "r_mbytes_per_sec": 0, 00:14:59.892 "w_mbytes_per_sec": 0 00:14:59.892 }, 00:14:59.892 "claimed": false, 00:14:59.892 "zoned": false, 00:14:59.892 "supported_io_types": { 00:14:59.892 "read": true, 00:14:59.892 "write": true, 00:14:59.892 "unmap": false, 00:14:59.892 "flush": false, 00:14:59.892 "reset": true, 00:14:59.892 "nvme_admin": false, 00:14:59.892 "nvme_io": false, 00:14:59.892 "nvme_io_md": false, 00:14:59.892 "write_zeroes": true, 00:14:59.892 "zcopy": false, 00:14:59.892 "get_zone_info": false, 00:14:59.892 "zone_management": false, 00:14:59.892 "zone_append": false, 00:14:59.892 "compare": false, 00:14:59.892 "compare_and_write": false, 00:14:59.892 "abort": false, 00:14:59.892 "seek_hole": false, 00:14:59.892 "seek_data": false, 00:14:59.892 "copy": false, 00:14:59.892 "nvme_iov_md": false 00:14:59.892 }, 00:14:59.892 "driver_specific": { 00:14:59.892 "raid": { 00:14:59.892 "uuid": "dce16385-e9fd-4455-b1d9-6767a5122cf3", 00:14:59.892 "strip_size_kb": 64, 00:14:59.892 "state": "online", 00:14:59.892 "raid_level": "raid5f", 00:14:59.892 "superblock": true, 00:14:59.892 "num_base_bdevs": 3, 00:14:59.892 "num_base_bdevs_discovered": 3, 00:14:59.892 "num_base_bdevs_operational": 3, 00:14:59.892 "base_bdevs_list": [ 00:14:59.892 { 00:14:59.892 "name": "BaseBdev1", 00:14:59.892 "uuid": "d3a09c5a-5ce8-4164-b633-865bac3a1e7a", 00:14:59.892 "is_configured": true, 00:14:59.892 "data_offset": 2048, 00:14:59.892 "data_size": 63488 00:14:59.892 }, 00:14:59.892 { 00:14:59.892 "name": "BaseBdev2", 00:14:59.892 "uuid": "743bc222-1476-480f-9077-9ac7898e7bb4", 00:14:59.892 "is_configured": true, 00:14:59.892 "data_offset": 2048, 00:14:59.892 "data_size": 63488 00:14:59.892 }, 00:14:59.892 { 00:14:59.892 "name": "BaseBdev3", 00:14:59.892 "uuid": "e447abac-2b01-4485-b177-c0ceb222beb5", 00:14:59.892 "is_configured": true, 00:14:59.892 "data_offset": 2048, 00:14:59.892 "data_size": 63488 00:14:59.892 } 00:14:59.892 ] 00:14:59.892 } 
00:14:59.892 } 00:14:59.892 }' 00:14:59.892 17:31:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:59.892 17:31:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:59.892 BaseBdev2 00:14:59.892 BaseBdev3' 00:14:59.892 17:31:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:59.892 17:31:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:59.892 17:31:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:59.892 17:31:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:59.892 17:31:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.892 17:31:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.892 17:31:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:59.892 17:31:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.892 17:31:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:59.892 17:31:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:59.892 17:31:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:59.892 17:31:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:59.892 17:31:33 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:59.892 17:31:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.892 17:31:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.892 17:31:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.892 17:31:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:59.892 17:31:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:59.892 17:31:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:59.892 17:31:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:59.892 17:31:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.892 17:31:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.892 17:31:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:00.152 17:31:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.152 17:31:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:00.152 17:31:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:00.152 17:31:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:00.152 17:31:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.152 17:31:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.152 [2024-12-07 
17:31:33.320824] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:00.152 17:31:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.152 17:31:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:00.152 17:31:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:00.152 17:31:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:00.152 17:31:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:15:00.152 17:31:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:00.152 17:31:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:15:00.153 17:31:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:00.153 17:31:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:00.153 17:31:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:00.153 17:31:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:00.153 17:31:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:00.153 17:31:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.153 17:31:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.153 17:31:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.153 17:31:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.153 17:31:33 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.153 17:31:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:00.153 17:31:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.153 17:31:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.153 17:31:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.153 17:31:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.153 "name": "Existed_Raid", 00:15:00.153 "uuid": "dce16385-e9fd-4455-b1d9-6767a5122cf3", 00:15:00.153 "strip_size_kb": 64, 00:15:00.153 "state": "online", 00:15:00.153 "raid_level": "raid5f", 00:15:00.153 "superblock": true, 00:15:00.153 "num_base_bdevs": 3, 00:15:00.153 "num_base_bdevs_discovered": 2, 00:15:00.153 "num_base_bdevs_operational": 2, 00:15:00.153 "base_bdevs_list": [ 00:15:00.153 { 00:15:00.153 "name": null, 00:15:00.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.153 "is_configured": false, 00:15:00.153 "data_offset": 0, 00:15:00.153 "data_size": 63488 00:15:00.153 }, 00:15:00.153 { 00:15:00.153 "name": "BaseBdev2", 00:15:00.153 "uuid": "743bc222-1476-480f-9077-9ac7898e7bb4", 00:15:00.153 "is_configured": true, 00:15:00.153 "data_offset": 2048, 00:15:00.153 "data_size": 63488 00:15:00.153 }, 00:15:00.153 { 00:15:00.153 "name": "BaseBdev3", 00:15:00.153 "uuid": "e447abac-2b01-4485-b177-c0ceb222beb5", 00:15:00.153 "is_configured": true, 00:15:00.153 "data_offset": 2048, 00:15:00.153 "data_size": 63488 00:15:00.153 } 00:15:00.153 ] 00:15:00.153 }' 00:15:00.153 17:31:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.153 17:31:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:00.724 17:31:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:00.724 17:31:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:00.724 17:31:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.724 17:31:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.724 17:31:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.724 17:31:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:00.724 17:31:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.724 17:31:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:00.724 17:31:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:00.724 17:31:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:00.724 17:31:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.724 17:31:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.724 [2024-12-07 17:31:33.906955] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:00.724 [2024-12-07 17:31:33.907199] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:00.724 [2024-12-07 17:31:34.003566] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:00.724 17:31:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.724 17:31:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:00.724 17:31:34 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:00.724 17:31:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.724 17:31:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:00.724 17:31:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.724 17:31:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.724 17:31:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.724 17:31:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:00.724 17:31:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:00.724 17:31:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:00.724 17:31:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.724 17:31:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.724 [2024-12-07 17:31:34.059498] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:00.724 [2024-12-07 17:31:34.059554] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:00.985 17:31:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.985 17:31:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:00.985 17:31:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:00.985 17:31:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.985 
17:31:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.985 17:31:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:00.985 17:31:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.985 17:31:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.985 17:31:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:00.985 17:31:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:00.985 17:31:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:00.985 17:31:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:00.985 17:31:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:00.986 17:31:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:00.986 17:31:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.986 17:31:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.986 BaseBdev2 00:15:00.986 17:31:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.986 17:31:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:00.986 17:31:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:00.986 17:31:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:00.986 17:31:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:00.986 17:31:34 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:00.986 17:31:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:00.986 17:31:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:00.986 17:31:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.986 17:31:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.986 17:31:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.986 17:31:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:00.986 17:31:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.986 17:31:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.986 [ 00:15:00.986 { 00:15:00.986 "name": "BaseBdev2", 00:15:00.986 "aliases": [ 00:15:00.986 "1f208a7b-91e0-46d3-adf2-25f8de35710c" 00:15:00.986 ], 00:15:00.986 "product_name": "Malloc disk", 00:15:00.986 "block_size": 512, 00:15:00.986 "num_blocks": 65536, 00:15:00.986 "uuid": "1f208a7b-91e0-46d3-adf2-25f8de35710c", 00:15:00.986 "assigned_rate_limits": { 00:15:00.986 "rw_ios_per_sec": 0, 00:15:00.986 "rw_mbytes_per_sec": 0, 00:15:00.986 "r_mbytes_per_sec": 0, 00:15:00.986 "w_mbytes_per_sec": 0 00:15:00.986 }, 00:15:00.986 "claimed": false, 00:15:00.986 "zoned": false, 00:15:00.986 "supported_io_types": { 00:15:00.986 "read": true, 00:15:00.986 "write": true, 00:15:00.986 "unmap": true, 00:15:00.986 "flush": true, 00:15:00.986 "reset": true, 00:15:00.986 "nvme_admin": false, 00:15:00.986 "nvme_io": false, 00:15:00.986 "nvme_io_md": false, 00:15:00.986 "write_zeroes": true, 00:15:00.986 "zcopy": true, 00:15:00.986 "get_zone_info": false, 
00:15:00.986 "zone_management": false, 00:15:00.986 "zone_append": false, 00:15:00.986 "compare": false, 00:15:00.986 "compare_and_write": false, 00:15:00.986 "abort": true, 00:15:00.986 "seek_hole": false, 00:15:00.986 "seek_data": false, 00:15:00.986 "copy": true, 00:15:00.986 "nvme_iov_md": false 00:15:00.986 }, 00:15:00.986 "memory_domains": [ 00:15:00.986 { 00:15:00.986 "dma_device_id": "system", 00:15:00.986 "dma_device_type": 1 00:15:00.986 }, 00:15:00.986 { 00:15:00.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.986 "dma_device_type": 2 00:15:00.986 } 00:15:00.986 ], 00:15:00.986 "driver_specific": {} 00:15:00.986 } 00:15:00.986 ] 00:15:00.986 17:31:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.986 17:31:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:00.986 17:31:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:00.986 17:31:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:00.986 17:31:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:00.986 17:31:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.986 17:31:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.986 BaseBdev3 00:15:00.986 17:31:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.986 17:31:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:00.986 17:31:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:00.986 17:31:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:00.986 17:31:34 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:00.986 17:31:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:00.986 17:31:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:00.986 17:31:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:00.986 17:31:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.986 17:31:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.986 17:31:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.986 17:31:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:00.986 17:31:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.986 17:31:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.246 [ 00:15:01.246 { 00:15:01.246 "name": "BaseBdev3", 00:15:01.246 "aliases": [ 00:15:01.246 "dc0300da-7523-4c21-9ce4-2723d88c46d7" 00:15:01.246 ], 00:15:01.246 "product_name": "Malloc disk", 00:15:01.246 "block_size": 512, 00:15:01.246 "num_blocks": 65536, 00:15:01.246 "uuid": "dc0300da-7523-4c21-9ce4-2723d88c46d7", 00:15:01.246 "assigned_rate_limits": { 00:15:01.246 "rw_ios_per_sec": 0, 00:15:01.246 "rw_mbytes_per_sec": 0, 00:15:01.246 "r_mbytes_per_sec": 0, 00:15:01.246 "w_mbytes_per_sec": 0 00:15:01.246 }, 00:15:01.246 "claimed": false, 00:15:01.246 "zoned": false, 00:15:01.246 "supported_io_types": { 00:15:01.246 "read": true, 00:15:01.246 "write": true, 00:15:01.246 "unmap": true, 00:15:01.246 "flush": true, 00:15:01.246 "reset": true, 00:15:01.246 "nvme_admin": false, 00:15:01.246 "nvme_io": false, 00:15:01.246 "nvme_io_md": 
false, 00:15:01.246 "write_zeroes": true, 00:15:01.246 "zcopy": true, 00:15:01.246 "get_zone_info": false, 00:15:01.246 "zone_management": false, 00:15:01.246 "zone_append": false, 00:15:01.246 "compare": false, 00:15:01.246 "compare_and_write": false, 00:15:01.246 "abort": true, 00:15:01.246 "seek_hole": false, 00:15:01.246 "seek_data": false, 00:15:01.246 "copy": true, 00:15:01.246 "nvme_iov_md": false 00:15:01.246 }, 00:15:01.246 "memory_domains": [ 00:15:01.246 { 00:15:01.246 "dma_device_id": "system", 00:15:01.246 "dma_device_type": 1 00:15:01.246 }, 00:15:01.246 { 00:15:01.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.246 "dma_device_type": 2 00:15:01.246 } 00:15:01.246 ], 00:15:01.246 "driver_specific": {} 00:15:01.246 } 00:15:01.246 ] 00:15:01.246 17:31:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.247 17:31:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:01.247 17:31:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:01.247 17:31:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:01.247 17:31:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:01.247 17:31:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.247 17:31:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.247 [2024-12-07 17:31:34.382808] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:01.247 [2024-12-07 17:31:34.382941] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:01.247 [2024-12-07 17:31:34.382993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:15:01.247 [2024-12-07 17:31:34.385083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:01.247 17:31:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.247 17:31:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:01.247 17:31:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:01.247 17:31:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:01.247 17:31:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:01.247 17:31:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:01.247 17:31:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:01.247 17:31:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.247 17:31:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.247 17:31:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.247 17:31:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.247 17:31:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.247 17:31:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.247 17:31:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.247 17:31:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.247 17:31:34 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.247 17:31:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.247 "name": "Existed_Raid", 00:15:01.247 "uuid": "fdef4ab1-a236-44d6-b384-0904a4b3588c", 00:15:01.247 "strip_size_kb": 64, 00:15:01.247 "state": "configuring", 00:15:01.247 "raid_level": "raid5f", 00:15:01.247 "superblock": true, 00:15:01.247 "num_base_bdevs": 3, 00:15:01.247 "num_base_bdevs_discovered": 2, 00:15:01.247 "num_base_bdevs_operational": 3, 00:15:01.247 "base_bdevs_list": [ 00:15:01.247 { 00:15:01.247 "name": "BaseBdev1", 00:15:01.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.247 "is_configured": false, 00:15:01.247 "data_offset": 0, 00:15:01.247 "data_size": 0 00:15:01.247 }, 00:15:01.247 { 00:15:01.247 "name": "BaseBdev2", 00:15:01.247 "uuid": "1f208a7b-91e0-46d3-adf2-25f8de35710c", 00:15:01.247 "is_configured": true, 00:15:01.247 "data_offset": 2048, 00:15:01.247 "data_size": 63488 00:15:01.247 }, 00:15:01.247 { 00:15:01.247 "name": "BaseBdev3", 00:15:01.247 "uuid": "dc0300da-7523-4c21-9ce4-2723d88c46d7", 00:15:01.247 "is_configured": true, 00:15:01.247 "data_offset": 2048, 00:15:01.247 "data_size": 63488 00:15:01.247 } 00:15:01.247 ] 00:15:01.247 }' 00:15:01.247 17:31:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.247 17:31:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.507 17:31:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:01.507 17:31:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.507 17:31:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.507 [2024-12-07 17:31:34.837989] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:01.507 
17:31:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.507 17:31:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:01.507 17:31:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:01.507 17:31:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:01.507 17:31:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:01.507 17:31:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:01.507 17:31:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:01.507 17:31:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.507 17:31:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.507 17:31:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.507 17:31:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.507 17:31:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.507 17:31:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.507 17:31:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.507 17:31:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.507 17:31:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.768 17:31:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:15:01.768 "name": "Existed_Raid", 00:15:01.768 "uuid": "fdef4ab1-a236-44d6-b384-0904a4b3588c", 00:15:01.768 "strip_size_kb": 64, 00:15:01.768 "state": "configuring", 00:15:01.768 "raid_level": "raid5f", 00:15:01.768 "superblock": true, 00:15:01.768 "num_base_bdevs": 3, 00:15:01.768 "num_base_bdevs_discovered": 1, 00:15:01.768 "num_base_bdevs_operational": 3, 00:15:01.768 "base_bdevs_list": [ 00:15:01.768 { 00:15:01.768 "name": "BaseBdev1", 00:15:01.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.768 "is_configured": false, 00:15:01.768 "data_offset": 0, 00:15:01.768 "data_size": 0 00:15:01.768 }, 00:15:01.768 { 00:15:01.768 "name": null, 00:15:01.768 "uuid": "1f208a7b-91e0-46d3-adf2-25f8de35710c", 00:15:01.768 "is_configured": false, 00:15:01.768 "data_offset": 0, 00:15:01.768 "data_size": 63488 00:15:01.768 }, 00:15:01.768 { 00:15:01.768 "name": "BaseBdev3", 00:15:01.768 "uuid": "dc0300da-7523-4c21-9ce4-2723d88c46d7", 00:15:01.768 "is_configured": true, 00:15:01.768 "data_offset": 2048, 00:15:01.768 "data_size": 63488 00:15:01.768 } 00:15:01.768 ] 00:15:01.768 }' 00:15:01.768 17:31:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.768 17:31:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.029 17:31:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.029 17:31:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:02.029 17:31:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.029 17:31:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.029 17:31:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.029 17:31:35 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:02.029 17:31:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:02.029 17:31:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.029 17:31:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.029 [2024-12-07 17:31:35.378229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:02.029 BaseBdev1 00:15:02.029 17:31:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.029 17:31:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:02.029 17:31:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:02.029 17:31:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:02.029 17:31:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:02.029 17:31:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:02.029 17:31:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:02.029 17:31:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:02.029 17:31:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.029 17:31:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.029 17:31:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.029 17:31:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:02.029 
17:31:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.029 17:31:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.296 [ 00:15:02.296 { 00:15:02.296 "name": "BaseBdev1", 00:15:02.296 "aliases": [ 00:15:02.296 "662a9f3e-a568-4780-94d0-513573dfac7e" 00:15:02.296 ], 00:15:02.296 "product_name": "Malloc disk", 00:15:02.296 "block_size": 512, 00:15:02.296 "num_blocks": 65536, 00:15:02.296 "uuid": "662a9f3e-a568-4780-94d0-513573dfac7e", 00:15:02.296 "assigned_rate_limits": { 00:15:02.296 "rw_ios_per_sec": 0, 00:15:02.296 "rw_mbytes_per_sec": 0, 00:15:02.296 "r_mbytes_per_sec": 0, 00:15:02.296 "w_mbytes_per_sec": 0 00:15:02.296 }, 00:15:02.296 "claimed": true, 00:15:02.296 "claim_type": "exclusive_write", 00:15:02.296 "zoned": false, 00:15:02.296 "supported_io_types": { 00:15:02.296 "read": true, 00:15:02.296 "write": true, 00:15:02.296 "unmap": true, 00:15:02.296 "flush": true, 00:15:02.296 "reset": true, 00:15:02.296 "nvme_admin": false, 00:15:02.296 "nvme_io": false, 00:15:02.296 "nvme_io_md": false, 00:15:02.296 "write_zeroes": true, 00:15:02.296 "zcopy": true, 00:15:02.296 "get_zone_info": false, 00:15:02.296 "zone_management": false, 00:15:02.296 "zone_append": false, 00:15:02.296 "compare": false, 00:15:02.296 "compare_and_write": false, 00:15:02.296 "abort": true, 00:15:02.296 "seek_hole": false, 00:15:02.296 "seek_data": false, 00:15:02.296 "copy": true, 00:15:02.296 "nvme_iov_md": false 00:15:02.296 }, 00:15:02.296 "memory_domains": [ 00:15:02.296 { 00:15:02.296 "dma_device_id": "system", 00:15:02.296 "dma_device_type": 1 00:15:02.296 }, 00:15:02.296 { 00:15:02.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.296 "dma_device_type": 2 00:15:02.296 } 00:15:02.296 ], 00:15:02.296 "driver_specific": {} 00:15:02.296 } 00:15:02.296 ] 00:15:02.296 17:31:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.296 
17:31:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:02.296 17:31:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:02.296 17:31:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:02.296 17:31:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:02.296 17:31:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:02.296 17:31:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:02.296 17:31:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:02.296 17:31:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.296 17:31:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.296 17:31:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.296 17:31:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.296 17:31:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:02.296 17:31:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.296 17:31:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.296 17:31:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.296 17:31:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.296 17:31:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:15:02.296 "name": "Existed_Raid", 00:15:02.296 "uuid": "fdef4ab1-a236-44d6-b384-0904a4b3588c", 00:15:02.296 "strip_size_kb": 64, 00:15:02.296 "state": "configuring", 00:15:02.296 "raid_level": "raid5f", 00:15:02.296 "superblock": true, 00:15:02.296 "num_base_bdevs": 3, 00:15:02.296 "num_base_bdevs_discovered": 2, 00:15:02.296 "num_base_bdevs_operational": 3, 00:15:02.296 "base_bdevs_list": [ 00:15:02.296 { 00:15:02.296 "name": "BaseBdev1", 00:15:02.296 "uuid": "662a9f3e-a568-4780-94d0-513573dfac7e", 00:15:02.296 "is_configured": true, 00:15:02.296 "data_offset": 2048, 00:15:02.296 "data_size": 63488 00:15:02.296 }, 00:15:02.296 { 00:15:02.296 "name": null, 00:15:02.296 "uuid": "1f208a7b-91e0-46d3-adf2-25f8de35710c", 00:15:02.296 "is_configured": false, 00:15:02.296 "data_offset": 0, 00:15:02.296 "data_size": 63488 00:15:02.296 }, 00:15:02.296 { 00:15:02.296 "name": "BaseBdev3", 00:15:02.296 "uuid": "dc0300da-7523-4c21-9ce4-2723d88c46d7", 00:15:02.296 "is_configured": true, 00:15:02.296 "data_offset": 2048, 00:15:02.296 "data_size": 63488 00:15:02.296 } 00:15:02.296 ] 00:15:02.296 }' 00:15:02.296 17:31:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.296 17:31:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.563 17:31:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.563 17:31:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:02.563 17:31:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.563 17:31:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.563 17:31:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.821 17:31:35 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:02.821 17:31:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:02.821 17:31:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.821 17:31:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.821 [2024-12-07 17:31:35.965295] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:02.821 17:31:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.821 17:31:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:02.821 17:31:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:02.821 17:31:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:02.821 17:31:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:02.821 17:31:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:02.821 17:31:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:02.821 17:31:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.821 17:31:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.821 17:31:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.821 17:31:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.821 17:31:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:02.821 17:31:35 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.821 17:31:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.821 17:31:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.821 17:31:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.821 17:31:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.821 "name": "Existed_Raid", 00:15:02.821 "uuid": "fdef4ab1-a236-44d6-b384-0904a4b3588c", 00:15:02.821 "strip_size_kb": 64, 00:15:02.821 "state": "configuring", 00:15:02.821 "raid_level": "raid5f", 00:15:02.821 "superblock": true, 00:15:02.821 "num_base_bdevs": 3, 00:15:02.821 "num_base_bdevs_discovered": 1, 00:15:02.821 "num_base_bdevs_operational": 3, 00:15:02.821 "base_bdevs_list": [ 00:15:02.821 { 00:15:02.821 "name": "BaseBdev1", 00:15:02.821 "uuid": "662a9f3e-a568-4780-94d0-513573dfac7e", 00:15:02.821 "is_configured": true, 00:15:02.821 "data_offset": 2048, 00:15:02.821 "data_size": 63488 00:15:02.821 }, 00:15:02.821 { 00:15:02.821 "name": null, 00:15:02.821 "uuid": "1f208a7b-91e0-46d3-adf2-25f8de35710c", 00:15:02.821 "is_configured": false, 00:15:02.821 "data_offset": 0, 00:15:02.821 "data_size": 63488 00:15:02.821 }, 00:15:02.821 { 00:15:02.821 "name": null, 00:15:02.821 "uuid": "dc0300da-7523-4c21-9ce4-2723d88c46d7", 00:15:02.821 "is_configured": false, 00:15:02.821 "data_offset": 0, 00:15:02.821 "data_size": 63488 00:15:02.821 } 00:15:02.821 ] 00:15:02.821 }' 00:15:02.821 17:31:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.821 17:31:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.080 17:31:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.080 17:31:36 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:03.080 17:31:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.080 17:31:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.080 17:31:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.339 17:31:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:03.339 17:31:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:03.339 17:31:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.339 17:31:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.339 [2024-12-07 17:31:36.488540] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:03.339 17:31:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.339 17:31:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:03.339 17:31:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:03.339 17:31:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:03.339 17:31:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:03.339 17:31:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:03.339 17:31:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:03.339 17:31:36 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.339 17:31:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.339 17:31:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.339 17:31:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.339 17:31:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.339 17:31:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:03.339 17:31:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.339 17:31:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.339 17:31:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.339 17:31:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.339 "name": "Existed_Raid", 00:15:03.339 "uuid": "fdef4ab1-a236-44d6-b384-0904a4b3588c", 00:15:03.339 "strip_size_kb": 64, 00:15:03.339 "state": "configuring", 00:15:03.339 "raid_level": "raid5f", 00:15:03.339 "superblock": true, 00:15:03.339 "num_base_bdevs": 3, 00:15:03.339 "num_base_bdevs_discovered": 2, 00:15:03.339 "num_base_bdevs_operational": 3, 00:15:03.339 "base_bdevs_list": [ 00:15:03.339 { 00:15:03.339 "name": "BaseBdev1", 00:15:03.339 "uuid": "662a9f3e-a568-4780-94d0-513573dfac7e", 00:15:03.339 "is_configured": true, 00:15:03.339 "data_offset": 2048, 00:15:03.339 "data_size": 63488 00:15:03.339 }, 00:15:03.339 { 00:15:03.339 "name": null, 00:15:03.339 "uuid": "1f208a7b-91e0-46d3-adf2-25f8de35710c", 00:15:03.339 "is_configured": false, 00:15:03.339 "data_offset": 0, 00:15:03.339 "data_size": 63488 00:15:03.339 }, 00:15:03.339 { 00:15:03.339 "name": "BaseBdev3", 00:15:03.339 
"uuid": "dc0300da-7523-4c21-9ce4-2723d88c46d7", 00:15:03.339 "is_configured": true, 00:15:03.339 "data_offset": 2048, 00:15:03.339 "data_size": 63488 00:15:03.339 } 00:15:03.339 ] 00:15:03.339 }' 00:15:03.339 17:31:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.339 17:31:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.598 17:31:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.598 17:31:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:03.598 17:31:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.598 17:31:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.598 17:31:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.598 17:31:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:03.598 17:31:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:03.598 17:31:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.598 17:31:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.598 [2024-12-07 17:31:36.967810] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:03.860 17:31:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.860 17:31:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:03.860 17:31:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:03.860 17:31:37 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:03.860 17:31:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:03.860 17:31:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:03.860 17:31:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:03.860 17:31:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.860 17:31:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.860 17:31:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.860 17:31:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.860 17:31:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.860 17:31:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:03.860 17:31:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.860 17:31:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.860 17:31:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.860 17:31:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.860 "name": "Existed_Raid", 00:15:03.860 "uuid": "fdef4ab1-a236-44d6-b384-0904a4b3588c", 00:15:03.860 "strip_size_kb": 64, 00:15:03.860 "state": "configuring", 00:15:03.860 "raid_level": "raid5f", 00:15:03.860 "superblock": true, 00:15:03.860 "num_base_bdevs": 3, 00:15:03.860 "num_base_bdevs_discovered": 1, 00:15:03.860 "num_base_bdevs_operational": 3, 00:15:03.860 
"base_bdevs_list": [ 00:15:03.860 { 00:15:03.860 "name": null, 00:15:03.860 "uuid": "662a9f3e-a568-4780-94d0-513573dfac7e", 00:15:03.860 "is_configured": false, 00:15:03.860 "data_offset": 0, 00:15:03.860 "data_size": 63488 00:15:03.860 }, 00:15:03.860 { 00:15:03.860 "name": null, 00:15:03.860 "uuid": "1f208a7b-91e0-46d3-adf2-25f8de35710c", 00:15:03.860 "is_configured": false, 00:15:03.860 "data_offset": 0, 00:15:03.860 "data_size": 63488 00:15:03.860 }, 00:15:03.860 { 00:15:03.860 "name": "BaseBdev3", 00:15:03.860 "uuid": "dc0300da-7523-4c21-9ce4-2723d88c46d7", 00:15:03.860 "is_configured": true, 00:15:03.860 "data_offset": 2048, 00:15:03.860 "data_size": 63488 00:15:03.860 } 00:15:03.860 ] 00:15:03.860 }' 00:15:03.860 17:31:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.860 17:31:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.429 17:31:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.429 17:31:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:04.429 17:31:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.429 17:31:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.429 17:31:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.429 17:31:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:04.429 17:31:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:04.429 17:31:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.429 17:31:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:15:04.429 [2024-12-07 17:31:37.569615] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:04.429 17:31:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.429 17:31:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:04.429 17:31:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:04.429 17:31:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:04.429 17:31:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:04.429 17:31:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:04.429 17:31:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:04.429 17:31:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.429 17:31:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.429 17:31:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.429 17:31:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.429 17:31:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.429 17:31:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.429 17:31:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.429 17:31:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:04.429 17:31:37 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.429 17:31:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.429 "name": "Existed_Raid", 00:15:04.429 "uuid": "fdef4ab1-a236-44d6-b384-0904a4b3588c", 00:15:04.429 "strip_size_kb": 64, 00:15:04.429 "state": "configuring", 00:15:04.429 "raid_level": "raid5f", 00:15:04.429 "superblock": true, 00:15:04.429 "num_base_bdevs": 3, 00:15:04.429 "num_base_bdevs_discovered": 2, 00:15:04.429 "num_base_bdevs_operational": 3, 00:15:04.429 "base_bdevs_list": [ 00:15:04.429 { 00:15:04.429 "name": null, 00:15:04.429 "uuid": "662a9f3e-a568-4780-94d0-513573dfac7e", 00:15:04.429 "is_configured": false, 00:15:04.429 "data_offset": 0, 00:15:04.429 "data_size": 63488 00:15:04.429 }, 00:15:04.429 { 00:15:04.429 "name": "BaseBdev2", 00:15:04.429 "uuid": "1f208a7b-91e0-46d3-adf2-25f8de35710c", 00:15:04.429 "is_configured": true, 00:15:04.429 "data_offset": 2048, 00:15:04.429 "data_size": 63488 00:15:04.429 }, 00:15:04.429 { 00:15:04.429 "name": "BaseBdev3", 00:15:04.429 "uuid": "dc0300da-7523-4c21-9ce4-2723d88c46d7", 00:15:04.429 "is_configured": true, 00:15:04.429 "data_offset": 2048, 00:15:04.429 "data_size": 63488 00:15:04.429 } 00:15:04.429 ] 00:15:04.429 }' 00:15:04.429 17:31:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.429 17:31:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.688 17:31:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.688 17:31:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.688 17:31:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.688 17:31:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 
00:15:04.688 17:31:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.688 17:31:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:04.688 17:31:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:04.688 17:31:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.688 17:31:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.688 17:31:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.688 17:31:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.948 17:31:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 662a9f3e-a568-4780-94d0-513573dfac7e 00:15:04.948 17:31:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.948 17:31:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.948 [2024-12-07 17:31:38.129434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:04.948 [2024-12-07 17:31:38.129768] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:04.948 [2024-12-07 17:31:38.129829] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:04.948 [2024-12-07 17:31:38.130151] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:04.948 NewBaseBdev 00:15:04.948 17:31:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.948 17:31:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:04.948 17:31:38 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:04.948 17:31:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:04.948 17:31:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:04.948 17:31:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:04.948 17:31:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:04.948 17:31:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:04.948 17:31:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.948 17:31:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.948 [2024-12-07 17:31:38.135450] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:04.948 [2024-12-07 17:31:38.135524] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:04.948 [2024-12-07 17:31:38.135747] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:04.948 17:31:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.948 17:31:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:04.948 17:31:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.948 17:31:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.948 [ 00:15:04.948 { 00:15:04.948 "name": "NewBaseBdev", 00:15:04.948 "aliases": [ 00:15:04.948 "662a9f3e-a568-4780-94d0-513573dfac7e" 00:15:04.948 ], 00:15:04.948 "product_name": "Malloc 
disk", 00:15:04.948 "block_size": 512, 00:15:04.948 "num_blocks": 65536, 00:15:04.948 "uuid": "662a9f3e-a568-4780-94d0-513573dfac7e", 00:15:04.948 "assigned_rate_limits": { 00:15:04.948 "rw_ios_per_sec": 0, 00:15:04.948 "rw_mbytes_per_sec": 0, 00:15:04.948 "r_mbytes_per_sec": 0, 00:15:04.948 "w_mbytes_per_sec": 0 00:15:04.948 }, 00:15:04.948 "claimed": true, 00:15:04.948 "claim_type": "exclusive_write", 00:15:04.948 "zoned": false, 00:15:04.948 "supported_io_types": { 00:15:04.948 "read": true, 00:15:04.948 "write": true, 00:15:04.948 "unmap": true, 00:15:04.948 "flush": true, 00:15:04.948 "reset": true, 00:15:04.948 "nvme_admin": false, 00:15:04.948 "nvme_io": false, 00:15:04.948 "nvme_io_md": false, 00:15:04.948 "write_zeroes": true, 00:15:04.948 "zcopy": true, 00:15:04.948 "get_zone_info": false, 00:15:04.948 "zone_management": false, 00:15:04.948 "zone_append": false, 00:15:04.948 "compare": false, 00:15:04.948 "compare_and_write": false, 00:15:04.948 "abort": true, 00:15:04.948 "seek_hole": false, 00:15:04.948 "seek_data": false, 00:15:04.948 "copy": true, 00:15:04.948 "nvme_iov_md": false 00:15:04.948 }, 00:15:04.948 "memory_domains": [ 00:15:04.948 { 00:15:04.948 "dma_device_id": "system", 00:15:04.948 "dma_device_type": 1 00:15:04.948 }, 00:15:04.948 { 00:15:04.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.948 "dma_device_type": 2 00:15:04.948 } 00:15:04.948 ], 00:15:04.948 "driver_specific": {} 00:15:04.948 } 00:15:04.948 ] 00:15:04.948 17:31:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.948 17:31:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:04.948 17:31:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:04.948 17:31:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:04.948 17:31:38 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:04.948 17:31:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:04.948 17:31:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:04.948 17:31:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:04.948 17:31:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.948 17:31:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.948 17:31:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.948 17:31:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.948 17:31:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.948 17:31:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:04.948 17:31:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.948 17:31:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.948 17:31:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.948 17:31:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.948 "name": "Existed_Raid", 00:15:04.948 "uuid": "fdef4ab1-a236-44d6-b384-0904a4b3588c", 00:15:04.948 "strip_size_kb": 64, 00:15:04.948 "state": "online", 00:15:04.948 "raid_level": "raid5f", 00:15:04.948 "superblock": true, 00:15:04.948 "num_base_bdevs": 3, 00:15:04.948 "num_base_bdevs_discovered": 3, 00:15:04.948 "num_base_bdevs_operational": 3, 00:15:04.948 
"base_bdevs_list": [ 00:15:04.948 { 00:15:04.948 "name": "NewBaseBdev", 00:15:04.948 "uuid": "662a9f3e-a568-4780-94d0-513573dfac7e", 00:15:04.948 "is_configured": true, 00:15:04.948 "data_offset": 2048, 00:15:04.948 "data_size": 63488 00:15:04.948 }, 00:15:04.948 { 00:15:04.948 "name": "BaseBdev2", 00:15:04.948 "uuid": "1f208a7b-91e0-46d3-adf2-25f8de35710c", 00:15:04.948 "is_configured": true, 00:15:04.948 "data_offset": 2048, 00:15:04.948 "data_size": 63488 00:15:04.948 }, 00:15:04.948 { 00:15:04.948 "name": "BaseBdev3", 00:15:04.948 "uuid": "dc0300da-7523-4c21-9ce4-2723d88c46d7", 00:15:04.948 "is_configured": true, 00:15:04.948 "data_offset": 2048, 00:15:04.948 "data_size": 63488 00:15:04.948 } 00:15:04.948 ] 00:15:04.948 }' 00:15:04.948 17:31:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.948 17:31:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.525 17:31:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:05.525 17:31:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:05.525 17:31:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:05.525 17:31:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:05.525 17:31:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:05.525 17:31:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:05.525 17:31:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:05.525 17:31:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.525 17:31:38 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:05.525 17:31:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:05.525 [2024-12-07 17:31:38.625990] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:05.525 17:31:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.525 17:31:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:05.525 "name": "Existed_Raid", 00:15:05.525 "aliases": [ 00:15:05.525 "fdef4ab1-a236-44d6-b384-0904a4b3588c" 00:15:05.525 ], 00:15:05.525 "product_name": "Raid Volume", 00:15:05.525 "block_size": 512, 00:15:05.525 "num_blocks": 126976, 00:15:05.525 "uuid": "fdef4ab1-a236-44d6-b384-0904a4b3588c", 00:15:05.525 "assigned_rate_limits": { 00:15:05.525 "rw_ios_per_sec": 0, 00:15:05.525 "rw_mbytes_per_sec": 0, 00:15:05.525 "r_mbytes_per_sec": 0, 00:15:05.525 "w_mbytes_per_sec": 0 00:15:05.525 }, 00:15:05.525 "claimed": false, 00:15:05.525 "zoned": false, 00:15:05.525 "supported_io_types": { 00:15:05.525 "read": true, 00:15:05.525 "write": true, 00:15:05.525 "unmap": false, 00:15:05.525 "flush": false, 00:15:05.525 "reset": true, 00:15:05.525 "nvme_admin": false, 00:15:05.525 "nvme_io": false, 00:15:05.525 "nvme_io_md": false, 00:15:05.525 "write_zeroes": true, 00:15:05.525 "zcopy": false, 00:15:05.525 "get_zone_info": false, 00:15:05.525 "zone_management": false, 00:15:05.525 "zone_append": false, 00:15:05.525 "compare": false, 00:15:05.525 "compare_and_write": false, 00:15:05.525 "abort": false, 00:15:05.525 "seek_hole": false, 00:15:05.525 "seek_data": false, 00:15:05.525 "copy": false, 00:15:05.525 "nvme_iov_md": false 00:15:05.525 }, 00:15:05.525 "driver_specific": { 00:15:05.525 "raid": { 00:15:05.525 "uuid": "fdef4ab1-a236-44d6-b384-0904a4b3588c", 00:15:05.525 "strip_size_kb": 64, 00:15:05.525 "state": "online", 00:15:05.525 "raid_level": "raid5f", 00:15:05.525 "superblock": true, 
00:15:05.525 "num_base_bdevs": 3, 00:15:05.525 "num_base_bdevs_discovered": 3, 00:15:05.525 "num_base_bdevs_operational": 3, 00:15:05.525 "base_bdevs_list": [ 00:15:05.525 { 00:15:05.525 "name": "NewBaseBdev", 00:15:05.525 "uuid": "662a9f3e-a568-4780-94d0-513573dfac7e", 00:15:05.525 "is_configured": true, 00:15:05.525 "data_offset": 2048, 00:15:05.525 "data_size": 63488 00:15:05.525 }, 00:15:05.525 { 00:15:05.525 "name": "BaseBdev2", 00:15:05.525 "uuid": "1f208a7b-91e0-46d3-adf2-25f8de35710c", 00:15:05.525 "is_configured": true, 00:15:05.525 "data_offset": 2048, 00:15:05.525 "data_size": 63488 00:15:05.525 }, 00:15:05.525 { 00:15:05.525 "name": "BaseBdev3", 00:15:05.525 "uuid": "dc0300da-7523-4c21-9ce4-2723d88c46d7", 00:15:05.525 "is_configured": true, 00:15:05.525 "data_offset": 2048, 00:15:05.525 "data_size": 63488 00:15:05.525 } 00:15:05.525 ] 00:15:05.525 } 00:15:05.525 } 00:15:05.525 }' 00:15:05.525 17:31:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:05.525 17:31:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:05.525 BaseBdev2 00:15:05.525 BaseBdev3' 00:15:05.525 17:31:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:05.525 17:31:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:05.525 17:31:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:05.525 17:31:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:05.525 17:31:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:05.525 17:31:38 
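The trace above runs `jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'` against the `bdev_get_bdevs` dump to build `base_bdev_names`. A minimal Python sketch of that same selection, using a trimmed copy of the JSON shown in this log (names taken verbatim from the dump above; most fields omitted for brevity):

```python
import json

# Trimmed copy of the bdev_get_bdevs output dumped above; only the
# fields the jq filter touches are kept.
raid_info = json.loads("""
{
  "driver_specific": {
    "raid": {
      "base_bdevs_list": [
        {"name": "NewBaseBdev", "is_configured": true},
        {"name": "BaseBdev2",   "is_configured": true},
        {"name": "BaseBdev3",   "is_configured": true}
      ]
    }
  }
}
""")

# Equivalent of:
#   jq -r '.driver_specific.raid.base_bdevs_list[]
#          | select(.is_configured == true).name'
names = [b["name"]
         for b in raid_info["driver_specific"]["raid"]["base_bdevs_list"]
         if b["is_configured"]]
print("\n".join(names))
```

As in the trace, the result is a newline-separated list (`NewBaseBdev`, `BaseBdev2`, `BaseBdev3`) that the script then iterates with `for name in $base_bdev_names`.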
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.525 17:31:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.525 17:31:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.525 17:31:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:05.525 17:31:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:05.525 17:31:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:05.525 17:31:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:05.526 17:31:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:05.526 17:31:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.526 17:31:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.526 17:31:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.526 17:31:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:05.526 17:31:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:05.526 17:31:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:05.526 17:31:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:05.526 17:31:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.526 17:31:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:05.526 17:31:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:05.526 17:31:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.526 17:31:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:05.526 17:31:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:05.526 17:31:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:05.526 17:31:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.526 17:31:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.526 [2024-12-07 17:31:38.893337] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:05.526 [2024-12-07 17:31:38.893366] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:05.526 [2024-12-07 17:31:38.893436] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:05.526 [2024-12-07 17:31:38.893728] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:05.526 [2024-12-07 17:31:38.893742] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:05.526 17:31:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.526 17:31:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80500 00:15:05.526 17:31:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 80500 ']' 00:15:05.526 17:31:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 80500 
00:15:05.785 17:31:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:05.785 17:31:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:05.785 17:31:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80500 00:15:05.785 17:31:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:05.785 killing process with pid 80500 00:15:05.785 17:31:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:05.785 17:31:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80500' 00:15:05.785 17:31:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80500 00:15:05.785 [2024-12-07 17:31:38.943004] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:05.785 17:31:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 80500 00:15:06.045 [2024-12-07 17:31:39.250789] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:07.428 17:31:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:07.428 00:15:07.428 real 0m10.785s 00:15:07.428 user 0m16.848s 00:15:07.428 sys 0m2.089s 00:15:07.428 17:31:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:07.428 ************************************ 00:15:07.428 END TEST raid5f_state_function_test_sb 00:15:07.428 ************************************ 00:15:07.428 17:31:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.428 17:31:40 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:15:07.428 17:31:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 
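The `killprocess 80500` sequence above first probes the pid with `kill -0 80500` before sending the real signal and `wait`-ing on it. A small Python sketch of that probe pattern (not the actual autotest helper; the `sleep` child here just stands in for the SPDK reactor process):

```python
import os
import subprocess

def process_alive(pid: int) -> bool:
    """Equivalent of the `kill -0 $pid` probe used by killprocess():
    signal 0 delivers nothing, but reports whether the pid exists
    (PermissionError means it exists under another user)."""
    try:
        os.kill(pid, 0)
    except ProcessLookupError:
        return False
    except PermissionError:
        return True
    return True

# Spawn a short-lived child to demonstrate the probe.
child = subprocess.Popen(["sleep", "30"])
alive_before = process_alive(child.pid)
child.terminate()   # analogous to `kill $pid`
child.wait()        # analogous to the trailing `wait $pid`
alive_after = process_alive(child.pid)
print(alive_before, alive_after)
```

The `ps --no-headers -o comm= $pid` call in the trace serves a related purpose: it recovers the process name (`reactor_0` here) so the helper can refuse to kill `sudo` by mistake.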
']' 00:15:07.428 17:31:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:07.428 17:31:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:07.428 ************************************ 00:15:07.428 START TEST raid5f_superblock_test 00:15:07.428 ************************************ 00:15:07.428 17:31:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:15:07.428 17:31:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:15:07.428 17:31:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:15:07.428 17:31:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:07.428 17:31:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:07.428 17:31:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:07.428 17:31:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:07.428 17:31:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:07.428 17:31:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:07.428 17:31:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:07.428 17:31:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:07.428 17:31:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:07.428 17:31:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:07.428 17:31:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:07.428 17:31:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:15:07.428 17:31:40 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:07.428 17:31:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:07.428 17:31:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81126 00:15:07.428 17:31:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:07.428 17:31:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81126 00:15:07.428 17:31:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81126 ']' 00:15:07.428 17:31:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:07.428 17:31:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:07.428 17:31:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:07.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:07.428 17:31:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:07.428 17:31:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.428 [2024-12-07 17:31:40.591321] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
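`waitforlisten 81126` above blocks until the freshly started `bdev_svc` app is reachable on `/var/tmp/spdk.sock`. A simplified sketch of the polling idea, under the assumption that readiness is signalled by the RPC socket path appearing (the real helper also verifies the pid is still alive and retries an actual RPC; the temp file and timer below are stand-ins for the app creating its socket):

```python
import os
import pathlib
import tempfile
import threading
import time

def wait_for_path(path: str, timeout: float = 5.0,
                  interval: float = 0.05) -> bool:
    """Poll until `path` exists, the way waitforlisten polls for the
    SPDK RPC socket (/var/tmp/spdk.sock by default). False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(path):
            return True
        time.sleep(interval)
    return False

# A background timer creates the path shortly after we start polling,
# standing in for the app finishing startup.
sock_path = os.path.join(tempfile.mkdtemp(), "spdk.sock")
threading.Timer(0.2, pathlib.Path(sock_path).touch).start()
ok = wait_for_path(sock_path)
print(ok)
```

This is why the trace interleaves the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message with the DPDK EAL initialization lines: the test script polls while the app boots.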
00:15:07.428 [2024-12-07 17:31:40.591447] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81126 ] 00:15:07.428 [2024-12-07 17:31:40.763880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:07.689 [2024-12-07 17:31:40.895952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.949 [2024-12-07 17:31:41.125257] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:07.949 [2024-12-07 17:31:41.125301] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:08.209 17:31:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:08.209 17:31:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:15:08.209 17:31:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:08.209 17:31:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:08.209 17:31:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:08.209 17:31:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:08.209 17:31:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:08.209 17:31:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:08.209 17:31:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:08.209 17:31:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:08.209 17:31:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:15:08.209 17:31:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.209 17:31:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.209 malloc1 00:15:08.209 17:31:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.209 17:31:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:08.209 17:31:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.209 17:31:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.209 [2024-12-07 17:31:41.486445] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:08.209 [2024-12-07 17:31:41.486582] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.209 [2024-12-07 17:31:41.486624] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:08.209 [2024-12-07 17:31:41.486651] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.209 [2024-12-07 17:31:41.488984] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.209 [2024-12-07 17:31:41.489053] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:08.209 pt1 00:15:08.209 17:31:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.209 17:31:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:08.209 17:31:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:08.209 17:31:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:08.209 17:31:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:15:08.209 17:31:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:08.209 17:31:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:08.209 17:31:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:08.209 17:31:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:08.209 17:31:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:08.209 17:31:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.209 17:31:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.209 malloc2 00:15:08.209 17:31:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.209 17:31:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:08.209 17:31:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.209 17:31:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.209 [2024-12-07 17:31:41.550169] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:08.209 [2024-12-07 17:31:41.550224] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.209 [2024-12-07 17:31:41.550250] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:08.209 [2024-12-07 17:31:41.550259] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.209 [2024-12-07 17:31:41.552539] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.210 [2024-12-07 17:31:41.552574] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:08.210 pt2 00:15:08.210 17:31:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.210 17:31:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:08.210 17:31:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:08.210 17:31:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:08.210 17:31:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:08.210 17:31:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:08.210 17:31:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:08.210 17:31:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:08.210 17:31:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:08.210 17:31:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:08.210 17:31:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.210 17:31:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.470 malloc3 00:15:08.470 17:31:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.470 17:31:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:08.470 17:31:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.470 17:31:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.470 [2024-12-07 17:31:41.645526] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:08.470 [2024-12-07 17:31:41.645651] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.470 [2024-12-07 17:31:41.645690] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:08.470 [2024-12-07 17:31:41.645719] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.470 [2024-12-07 17:31:41.648015] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.470 [2024-12-07 17:31:41.648083] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:08.470 pt3 00:15:08.470 17:31:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.470 17:31:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:08.470 17:31:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:08.470 17:31:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:15:08.470 17:31:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.470 17:31:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.470 [2024-12-07 17:31:41.657558] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:08.470 [2024-12-07 17:31:41.659564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:08.470 [2024-12-07 17:31:41.659665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:08.470 [2024-12-07 17:31:41.659867] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:08.470 [2024-12-07 17:31:41.659924] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:15:08.470 [2024-12-07 17:31:41.660182] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:08.470 [2024-12-07 17:31:41.665881] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:08.470 [2024-12-07 17:31:41.665962] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:08.470 [2024-12-07 17:31:41.666183] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:08.470 17:31:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.470 17:31:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:08.470 17:31:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:08.470 17:31:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:08.470 17:31:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:08.470 17:31:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:08.470 17:31:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:08.470 17:31:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.470 17:31:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:08.470 17:31:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.470 17:31:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.470 17:31:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.470 17:31:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.470 
17:31:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.470 17:31:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.470 17:31:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.471 17:31:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:08.471 "name": "raid_bdev1", 00:15:08.471 "uuid": "e4f36c90-4bf3-4a7d-9a22-9c79b7389510", 00:15:08.471 "strip_size_kb": 64, 00:15:08.471 "state": "online", 00:15:08.471 "raid_level": "raid5f", 00:15:08.471 "superblock": true, 00:15:08.471 "num_base_bdevs": 3, 00:15:08.471 "num_base_bdevs_discovered": 3, 00:15:08.471 "num_base_bdevs_operational": 3, 00:15:08.471 "base_bdevs_list": [ 00:15:08.471 { 00:15:08.471 "name": "pt1", 00:15:08.471 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:08.471 "is_configured": true, 00:15:08.471 "data_offset": 2048, 00:15:08.471 "data_size": 63488 00:15:08.471 }, 00:15:08.471 { 00:15:08.471 "name": "pt2", 00:15:08.471 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:08.471 "is_configured": true, 00:15:08.471 "data_offset": 2048, 00:15:08.471 "data_size": 63488 00:15:08.471 }, 00:15:08.471 { 00:15:08.471 "name": "pt3", 00:15:08.471 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:08.471 "is_configured": true, 00:15:08.471 "data_offset": 2048, 00:15:08.471 "data_size": 63488 00:15:08.471 } 00:15:08.471 ] 00:15:08.471 }' 00:15:08.471 17:31:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.471 17:31:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.731 17:31:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:08.731 17:31:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:08.731 17:31:42 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:08.731 17:31:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:08.731 17:31:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:08.731 17:31:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:08.731 17:31:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:08.731 17:31:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.731 17:31:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:08.731 17:31:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.731 [2024-12-07 17:31:42.100529] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:08.991 17:31:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.991 17:31:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:08.991 "name": "raid_bdev1", 00:15:08.991 "aliases": [ 00:15:08.991 "e4f36c90-4bf3-4a7d-9a22-9c79b7389510" 00:15:08.991 ], 00:15:08.991 "product_name": "Raid Volume", 00:15:08.991 "block_size": 512, 00:15:08.991 "num_blocks": 126976, 00:15:08.991 "uuid": "e4f36c90-4bf3-4a7d-9a22-9c79b7389510", 00:15:08.991 "assigned_rate_limits": { 00:15:08.991 "rw_ios_per_sec": 0, 00:15:08.991 "rw_mbytes_per_sec": 0, 00:15:08.991 "r_mbytes_per_sec": 0, 00:15:08.991 "w_mbytes_per_sec": 0 00:15:08.991 }, 00:15:08.991 "claimed": false, 00:15:08.991 "zoned": false, 00:15:08.991 "supported_io_types": { 00:15:08.991 "read": true, 00:15:08.991 "write": true, 00:15:08.991 "unmap": false, 00:15:08.991 "flush": false, 00:15:08.991 "reset": true, 00:15:08.991 "nvme_admin": false, 00:15:08.991 "nvme_io": false, 00:15:08.991 "nvme_io_md": false, 
00:15:08.991 "write_zeroes": true, 00:15:08.991 "zcopy": false, 00:15:08.991 "get_zone_info": false, 00:15:08.991 "zone_management": false, 00:15:08.991 "zone_append": false, 00:15:08.991 "compare": false, 00:15:08.991 "compare_and_write": false, 00:15:08.991 "abort": false, 00:15:08.991 "seek_hole": false, 00:15:08.991 "seek_data": false, 00:15:08.991 "copy": false, 00:15:08.991 "nvme_iov_md": false 00:15:08.991 }, 00:15:08.991 "driver_specific": { 00:15:08.991 "raid": { 00:15:08.991 "uuid": "e4f36c90-4bf3-4a7d-9a22-9c79b7389510", 00:15:08.991 "strip_size_kb": 64, 00:15:08.991 "state": "online", 00:15:08.991 "raid_level": "raid5f", 00:15:08.991 "superblock": true, 00:15:08.991 "num_base_bdevs": 3, 00:15:08.991 "num_base_bdevs_discovered": 3, 00:15:08.991 "num_base_bdevs_operational": 3, 00:15:08.991 "base_bdevs_list": [ 00:15:08.992 { 00:15:08.992 "name": "pt1", 00:15:08.992 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:08.992 "is_configured": true, 00:15:08.992 "data_offset": 2048, 00:15:08.992 "data_size": 63488 00:15:08.992 }, 00:15:08.992 { 00:15:08.992 "name": "pt2", 00:15:08.992 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:08.992 "is_configured": true, 00:15:08.992 "data_offset": 2048, 00:15:08.992 "data_size": 63488 00:15:08.992 }, 00:15:08.992 { 00:15:08.992 "name": "pt3", 00:15:08.992 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:08.992 "is_configured": true, 00:15:08.992 "data_offset": 2048, 00:15:08.992 "data_size": 63488 00:15:08.992 } 00:15:08.992 ] 00:15:08.992 } 00:15:08.992 } 00:15:08.992 }' 00:15:08.992 17:31:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:08.992 17:31:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:08.992 pt2 00:15:08.992 pt3' 00:15:08.992 17:31:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:15:08.992 17:31:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:08.992 17:31:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:08.992 17:31:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:08.992 17:31:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.992 17:31:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.992 17:31:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:08.992 17:31:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.992 17:31:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:08.992 17:31:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:08.992 17:31:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:08.992 17:31:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:08.992 17:31:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.992 17:31:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.992 17:31:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:08.992 17:31:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.992 17:31:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:08.992 17:31:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:08.992 
17:31:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:08.992 17:31:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:08.992 17:31:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:08.992 17:31:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.992 17:31:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.992 17:31:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.992 17:31:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:08.992 17:31:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:09.252 17:31:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:09.252 17:31:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.252 17:31:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.252 17:31:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:09.252 [2024-12-07 17:31:42.380058] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:09.252 17:31:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.252 17:31:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e4f36c90-4bf3-4a7d-9a22-9c79b7389510 00:15:09.252 17:31:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z e4f36c90-4bf3-4a7d-9a22-9c79b7389510 ']' 00:15:09.252 17:31:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:09.252 17:31:42 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.252 17:31:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.252 [2024-12-07 17:31:42.427797] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:09.252 [2024-12-07 17:31:42.427822] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:09.252 [2024-12-07 17:31:42.427888] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:09.252 [2024-12-07 17:31:42.427970] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:09.252 [2024-12-07 17:31:42.427982] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:09.252 17:31:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.252 17:31:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.252 17:31:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.252 17:31:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:09.252 17:31:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.252 17:31:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.252 17:31:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:09.252 17:31:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:09.252 17:31:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:09.252 17:31:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:09.252 17:31:42 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.252 17:31:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.252 17:31:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.252 17:31:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:09.252 17:31:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:09.252 17:31:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.252 17:31:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.252 17:31:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.252 17:31:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:09.252 17:31:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:09.252 17:31:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.252 17:31:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.252 17:31:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.252 17:31:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:09.252 17:31:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.252 17:31:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.252 17:31:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:09.252 17:31:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.252 17:31:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:15:09.252 17:31:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:09.252 17:31:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:15:09.252 17:31:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:09.252 17:31:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:09.252 17:31:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:09.252 17:31:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:09.252 17:31:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:09.252 17:31:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:09.252 17:31:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.252 17:31:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.252 [2024-12-07 17:31:42.583592] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:09.252 [2024-12-07 17:31:42.585565] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:09.253 [2024-12-07 17:31:42.585609] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:09.253 [2024-12-07 17:31:42.585654] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:09.253 [2024-12-07 17:31:42.585694] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:09.253 [2024-12-07 17:31:42.585711] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:09.253 [2024-12-07 17:31:42.585725] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:09.253 [2024-12-07 17:31:42.585733] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:15:09.253 request: 00:15:09.253 { 00:15:09.253 "name": "raid_bdev1", 00:15:09.253 "raid_level": "raid5f", 00:15:09.253 "base_bdevs": [ 00:15:09.253 "malloc1", 00:15:09.253 "malloc2", 00:15:09.253 "malloc3" 00:15:09.253 ], 00:15:09.253 "strip_size_kb": 64, 00:15:09.253 "superblock": false, 00:15:09.253 "method": "bdev_raid_create", 00:15:09.253 "req_id": 1 00:15:09.253 } 00:15:09.253 Got JSON-RPC error response 00:15:09.253 response: 00:15:09.253 { 00:15:09.253 "code": -17, 00:15:09.253 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:09.253 } 00:15:09.253 17:31:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:09.253 17:31:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:15:09.253 17:31:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:09.253 17:31:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:09.253 17:31:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:09.253 17:31:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.253 17:31:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.253 17:31:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:15:09.253 17:31:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:09.253 17:31:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.513 17:31:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:09.513 17:31:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:09.513 17:31:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:09.513 17:31:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.513 17:31:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.513 [2024-12-07 17:31:42.651534] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:09.513 [2024-12-07 17:31:42.651625] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:09.513 [2024-12-07 17:31:42.651658] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:09.513 [2024-12-07 17:31:42.651684] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:09.513 [2024-12-07 17:31:42.654103] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:09.513 [2024-12-07 17:31:42.654169] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:09.513 [2024-12-07 17:31:42.654254] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:09.513 [2024-12-07 17:31:42.654343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:09.513 pt1 00:15:09.513 17:31:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.513 17:31:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring 
raid5f 64 3 00:15:09.513 17:31:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:09.513 17:31:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:09.513 17:31:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:09.513 17:31:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:09.513 17:31:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:09.513 17:31:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.513 17:31:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.513 17:31:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.513 17:31:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.513 17:31:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.513 17:31:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.513 17:31:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.513 17:31:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.513 17:31:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.513 17:31:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.513 "name": "raid_bdev1", 00:15:09.513 "uuid": "e4f36c90-4bf3-4a7d-9a22-9c79b7389510", 00:15:09.513 "strip_size_kb": 64, 00:15:09.513 "state": "configuring", 00:15:09.513 "raid_level": "raid5f", 00:15:09.513 "superblock": true, 00:15:09.513 "num_base_bdevs": 3, 00:15:09.513 "num_base_bdevs_discovered": 1, 00:15:09.513 
"num_base_bdevs_operational": 3, 00:15:09.513 "base_bdevs_list": [ 00:15:09.513 { 00:15:09.513 "name": "pt1", 00:15:09.513 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:09.513 "is_configured": true, 00:15:09.513 "data_offset": 2048, 00:15:09.513 "data_size": 63488 00:15:09.513 }, 00:15:09.513 { 00:15:09.513 "name": null, 00:15:09.513 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:09.513 "is_configured": false, 00:15:09.513 "data_offset": 2048, 00:15:09.513 "data_size": 63488 00:15:09.513 }, 00:15:09.513 { 00:15:09.513 "name": null, 00:15:09.513 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:09.513 "is_configured": false, 00:15:09.513 "data_offset": 2048, 00:15:09.513 "data_size": 63488 00:15:09.513 } 00:15:09.513 ] 00:15:09.513 }' 00:15:09.513 17:31:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.513 17:31:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.774 17:31:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:15:09.774 17:31:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:09.774 17:31:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.774 17:31:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.774 [2024-12-07 17:31:43.115099] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:09.774 [2024-12-07 17:31:43.115247] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:09.774 [2024-12-07 17:31:43.115276] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:09.774 [2024-12-07 17:31:43.115286] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:09.774 [2024-12-07 17:31:43.115837] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:09.774 [2024-12-07 17:31:43.115864] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:09.774 [2024-12-07 17:31:43.115980] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:09.774 [2024-12-07 17:31:43.116011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:09.774 pt2 00:15:09.774 17:31:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.774 17:31:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:09.774 17:31:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.774 17:31:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.774 [2024-12-07 17:31:43.127069] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:09.774 17:31:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.774 17:31:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:09.774 17:31:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:09.774 17:31:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:09.774 17:31:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:09.774 17:31:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:09.774 17:31:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:09.774 17:31:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.774 17:31:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:09.774 17:31:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.774 17:31:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.774 17:31:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.774 17:31:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.774 17:31:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.774 17:31:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.034 17:31:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.034 17:31:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.034 "name": "raid_bdev1", 00:15:10.034 "uuid": "e4f36c90-4bf3-4a7d-9a22-9c79b7389510", 00:15:10.035 "strip_size_kb": 64, 00:15:10.035 "state": "configuring", 00:15:10.035 "raid_level": "raid5f", 00:15:10.035 "superblock": true, 00:15:10.035 "num_base_bdevs": 3, 00:15:10.035 "num_base_bdevs_discovered": 1, 00:15:10.035 "num_base_bdevs_operational": 3, 00:15:10.035 "base_bdevs_list": [ 00:15:10.035 { 00:15:10.035 "name": "pt1", 00:15:10.035 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:10.035 "is_configured": true, 00:15:10.035 "data_offset": 2048, 00:15:10.035 "data_size": 63488 00:15:10.035 }, 00:15:10.035 { 00:15:10.035 "name": null, 00:15:10.035 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:10.035 "is_configured": false, 00:15:10.035 "data_offset": 0, 00:15:10.035 "data_size": 63488 00:15:10.035 }, 00:15:10.035 { 00:15:10.035 "name": null, 00:15:10.035 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:10.035 "is_configured": false, 00:15:10.035 "data_offset": 2048, 00:15:10.035 "data_size": 63488 00:15:10.035 } 00:15:10.035 ] 00:15:10.035 }' 00:15:10.035 17:31:43 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.035 17:31:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.296 17:31:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:10.296 17:31:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:10.296 17:31:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:10.296 17:31:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.296 17:31:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.296 [2024-12-07 17:31:43.566223] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:10.296 [2024-12-07 17:31:43.566322] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.296 [2024-12-07 17:31:43.566355] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:15:10.296 [2024-12-07 17:31:43.566384] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.296 [2024-12-07 17:31:43.566840] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.296 [2024-12-07 17:31:43.566899] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:10.296 [2024-12-07 17:31:43.567004] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:10.296 [2024-12-07 17:31:43.567058] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:10.296 pt2 00:15:10.296 17:31:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.296 17:31:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:10.296 17:31:43 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:10.296 17:31:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:10.296 17:31:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.296 17:31:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.296 [2024-12-07 17:31:43.578197] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:10.296 [2024-12-07 17:31:43.578276] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.296 [2024-12-07 17:31:43.578301] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:10.296 [2024-12-07 17:31:43.578325] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.296 [2024-12-07 17:31:43.578697] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.296 [2024-12-07 17:31:43.578755] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:10.296 [2024-12-07 17:31:43.578834] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:10.296 [2024-12-07 17:31:43.578878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:10.296 [2024-12-07 17:31:43.579043] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:10.296 [2024-12-07 17:31:43.579087] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:10.296 [2024-12-07 17:31:43.579346] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:10.296 [2024-12-07 17:31:43.584411] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:10.296 [2024-12-07 17:31:43.584462] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:10.296 [2024-12-07 17:31:43.584686] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:10.296 pt3 00:15:10.296 17:31:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.296 17:31:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:10.296 17:31:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:10.296 17:31:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:10.296 17:31:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:10.296 17:31:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:10.296 17:31:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:10.296 17:31:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:10.296 17:31:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:10.296 17:31:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.296 17:31:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.296 17:31:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.296 17:31:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.296 17:31:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.296 17:31:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.296 17:31:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- 
# set +x 00:15:10.296 17:31:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.296 17:31:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.296 17:31:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.296 "name": "raid_bdev1", 00:15:10.296 "uuid": "e4f36c90-4bf3-4a7d-9a22-9c79b7389510", 00:15:10.296 "strip_size_kb": 64, 00:15:10.296 "state": "online", 00:15:10.296 "raid_level": "raid5f", 00:15:10.296 "superblock": true, 00:15:10.296 "num_base_bdevs": 3, 00:15:10.296 "num_base_bdevs_discovered": 3, 00:15:10.296 "num_base_bdevs_operational": 3, 00:15:10.296 "base_bdevs_list": [ 00:15:10.296 { 00:15:10.296 "name": "pt1", 00:15:10.296 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:10.296 "is_configured": true, 00:15:10.296 "data_offset": 2048, 00:15:10.296 "data_size": 63488 00:15:10.296 }, 00:15:10.296 { 00:15:10.296 "name": "pt2", 00:15:10.296 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:10.296 "is_configured": true, 00:15:10.296 "data_offset": 2048, 00:15:10.296 "data_size": 63488 00:15:10.296 }, 00:15:10.296 { 00:15:10.296 "name": "pt3", 00:15:10.296 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:10.296 "is_configured": true, 00:15:10.296 "data_offset": 2048, 00:15:10.296 "data_size": 63488 00:15:10.296 } 00:15:10.296 ] 00:15:10.296 }' 00:15:10.296 17:31:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.296 17:31:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.866 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:10.866 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:10.866 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:10.866 
17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:10.866 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:10.866 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:10.866 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:10.866 17:31:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.866 17:31:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.866 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:10.866 [2024-12-07 17:31:44.050858] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:10.866 17:31:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.866 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:10.866 "name": "raid_bdev1", 00:15:10.866 "aliases": [ 00:15:10.866 "e4f36c90-4bf3-4a7d-9a22-9c79b7389510" 00:15:10.866 ], 00:15:10.866 "product_name": "Raid Volume", 00:15:10.866 "block_size": 512, 00:15:10.866 "num_blocks": 126976, 00:15:10.866 "uuid": "e4f36c90-4bf3-4a7d-9a22-9c79b7389510", 00:15:10.866 "assigned_rate_limits": { 00:15:10.866 "rw_ios_per_sec": 0, 00:15:10.866 "rw_mbytes_per_sec": 0, 00:15:10.866 "r_mbytes_per_sec": 0, 00:15:10.866 "w_mbytes_per_sec": 0 00:15:10.866 }, 00:15:10.866 "claimed": false, 00:15:10.866 "zoned": false, 00:15:10.866 "supported_io_types": { 00:15:10.866 "read": true, 00:15:10.866 "write": true, 00:15:10.866 "unmap": false, 00:15:10.867 "flush": false, 00:15:10.867 "reset": true, 00:15:10.867 "nvme_admin": false, 00:15:10.867 "nvme_io": false, 00:15:10.867 "nvme_io_md": false, 00:15:10.867 "write_zeroes": true, 00:15:10.867 "zcopy": false, 00:15:10.867 "get_zone_info": false, 
00:15:10.867 "zone_management": false, 00:15:10.867 "zone_append": false, 00:15:10.867 "compare": false, 00:15:10.867 "compare_and_write": false, 00:15:10.867 "abort": false, 00:15:10.867 "seek_hole": false, 00:15:10.867 "seek_data": false, 00:15:10.867 "copy": false, 00:15:10.867 "nvme_iov_md": false 00:15:10.867 }, 00:15:10.867 "driver_specific": { 00:15:10.867 "raid": { 00:15:10.867 "uuid": "e4f36c90-4bf3-4a7d-9a22-9c79b7389510", 00:15:10.867 "strip_size_kb": 64, 00:15:10.867 "state": "online", 00:15:10.867 "raid_level": "raid5f", 00:15:10.867 "superblock": true, 00:15:10.867 "num_base_bdevs": 3, 00:15:10.867 "num_base_bdevs_discovered": 3, 00:15:10.867 "num_base_bdevs_operational": 3, 00:15:10.867 "base_bdevs_list": [ 00:15:10.867 { 00:15:10.867 "name": "pt1", 00:15:10.867 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:10.867 "is_configured": true, 00:15:10.867 "data_offset": 2048, 00:15:10.867 "data_size": 63488 00:15:10.867 }, 00:15:10.867 { 00:15:10.867 "name": "pt2", 00:15:10.867 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:10.867 "is_configured": true, 00:15:10.867 "data_offset": 2048, 00:15:10.867 "data_size": 63488 00:15:10.867 }, 00:15:10.867 { 00:15:10.867 "name": "pt3", 00:15:10.867 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:10.867 "is_configured": true, 00:15:10.867 "data_offset": 2048, 00:15:10.867 "data_size": 63488 00:15:10.867 } 00:15:10.867 ] 00:15:10.867 } 00:15:10.867 } 00:15:10.867 }' 00:15:10.867 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:10.867 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:10.867 pt2 00:15:10.867 pt3' 00:15:10.867 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:10.867 17:31:44 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:10.867 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:10.867 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:10.867 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:10.867 17:31:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.867 17:31:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.867 17:31:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.867 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:10.867 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:10.867 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:10.867 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:10.867 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:10.867 17:31:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.867 17:31:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.127 17:31:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.127 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:11.127 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:11.127 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:15:11.127 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:11.127 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:11.127 17:31:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.127 17:31:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.127 17:31:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.127 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:11.127 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:11.127 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:11.127 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:11.127 17:31:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.127 17:31:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.127 [2024-12-07 17:31:44.354287] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:11.127 17:31:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.127 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e4f36c90-4bf3-4a7d-9a22-9c79b7389510 '!=' e4f36c90-4bf3-4a7d-9a22-9c79b7389510 ']' 00:15:11.127 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:15:11.127 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:11.127 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:11.127 17:31:44 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:11.127 17:31:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.127 17:31:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.127 [2024-12-07 17:31:44.402098] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:11.127 17:31:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.127 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:11.127 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:11.127 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:11.127 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:11.127 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:11.127 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:11.127 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.127 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.127 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.128 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.128 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.128 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.128 17:31:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:11.128 17:31:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.128 17:31:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.128 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.128 "name": "raid_bdev1", 00:15:11.128 "uuid": "e4f36c90-4bf3-4a7d-9a22-9c79b7389510", 00:15:11.128 "strip_size_kb": 64, 00:15:11.128 "state": "online", 00:15:11.128 "raid_level": "raid5f", 00:15:11.128 "superblock": true, 00:15:11.128 "num_base_bdevs": 3, 00:15:11.128 "num_base_bdevs_discovered": 2, 00:15:11.128 "num_base_bdevs_operational": 2, 00:15:11.128 "base_bdevs_list": [ 00:15:11.128 { 00:15:11.128 "name": null, 00:15:11.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.128 "is_configured": false, 00:15:11.128 "data_offset": 0, 00:15:11.128 "data_size": 63488 00:15:11.128 }, 00:15:11.128 { 00:15:11.128 "name": "pt2", 00:15:11.128 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:11.128 "is_configured": true, 00:15:11.128 "data_offset": 2048, 00:15:11.128 "data_size": 63488 00:15:11.128 }, 00:15:11.128 { 00:15:11.128 "name": "pt3", 00:15:11.128 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:11.128 "is_configured": true, 00:15:11.128 "data_offset": 2048, 00:15:11.128 "data_size": 63488 00:15:11.128 } 00:15:11.128 ] 00:15:11.128 }' 00:15:11.128 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.128 17:31:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.699 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:11.699 17:31:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.699 17:31:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.699 [2024-12-07 17:31:44.845409] bdev_raid.c:2411:raid_bdev_delete: 
*DEBUG*: delete raid bdev: raid_bdev1 00:15:11.699 [2024-12-07 17:31:44.845506] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:11.699 [2024-12-07 17:31:44.845609] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:11.699 [2024-12-07 17:31:44.845689] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:11.699 [2024-12-07 17:31:44.845755] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:11.699 17:31:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.699 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.699 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:11.699 17:31:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.699 17:31:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.699 17:31:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.699 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:11.699 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:11.699 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:11.699 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:11.699 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:11.699 17:31:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.699 17:31:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.699 17:31:44 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.699 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:11.699 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:11.699 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:15:11.699 17:31:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.699 17:31:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.699 17:31:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.699 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:11.699 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:11.699 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:11.699 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:11.699 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:11.699 17:31:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.699 17:31:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.699 [2024-12-07 17:31:44.933210] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:11.699 [2024-12-07 17:31:44.933278] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:11.699 [2024-12-07 17:31:44.933296] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:11.699 [2024-12-07 17:31:44.933308] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:15:11.699 [2024-12-07 17:31:44.935913] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:11.699 [2024-12-07 17:31:44.935962] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:11.699 [2024-12-07 17:31:44.936053] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:11.699 [2024-12-07 17:31:44.936106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:11.699 pt2 00:15:11.699 17:31:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.699 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:11.699 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:11.699 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:11.699 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:11.699 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:11.699 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:11.699 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.699 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.699 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.699 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.699 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.699 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:15:11.699 17:31:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.699 17:31:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.699 17:31:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.699 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.699 "name": "raid_bdev1", 00:15:11.699 "uuid": "e4f36c90-4bf3-4a7d-9a22-9c79b7389510", 00:15:11.699 "strip_size_kb": 64, 00:15:11.699 "state": "configuring", 00:15:11.699 "raid_level": "raid5f", 00:15:11.699 "superblock": true, 00:15:11.699 "num_base_bdevs": 3, 00:15:11.699 "num_base_bdevs_discovered": 1, 00:15:11.699 "num_base_bdevs_operational": 2, 00:15:11.699 "base_bdevs_list": [ 00:15:11.699 { 00:15:11.699 "name": null, 00:15:11.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.699 "is_configured": false, 00:15:11.699 "data_offset": 2048, 00:15:11.699 "data_size": 63488 00:15:11.699 }, 00:15:11.699 { 00:15:11.699 "name": "pt2", 00:15:11.699 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:11.699 "is_configured": true, 00:15:11.699 "data_offset": 2048, 00:15:11.699 "data_size": 63488 00:15:11.699 }, 00:15:11.699 { 00:15:11.699 "name": null, 00:15:11.699 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:11.699 "is_configured": false, 00:15:11.699 "data_offset": 2048, 00:15:11.699 "data_size": 63488 00:15:11.699 } 00:15:11.699 ] 00:15:11.699 }' 00:15:11.699 17:31:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.699 17:31:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.267 17:31:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:12.267 17:31:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:12.267 17:31:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- 
# i=2 00:15:12.267 17:31:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:12.267 17:31:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.267 17:31:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.267 [2024-12-07 17:31:45.420359] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:12.267 [2024-12-07 17:31:45.420491] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:12.267 [2024-12-07 17:31:45.420526] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:12.267 [2024-12-07 17:31:45.420555] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:12.267 [2024-12-07 17:31:45.421030] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:12.267 [2024-12-07 17:31:45.421086] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:12.267 [2024-12-07 17:31:45.421188] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:12.267 [2024-12-07 17:31:45.421238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:12.267 [2024-12-07 17:31:45.421367] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:12.267 [2024-12-07 17:31:45.421407] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:12.267 [2024-12-07 17:31:45.421667] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:12.267 [2024-12-07 17:31:45.426627] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:12.267 [2024-12-07 17:31:45.426680] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000008200 00:15:12.267 [2024-12-07 17:31:45.427029] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:12.267 pt3 00:15:12.267 17:31:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.267 17:31:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:12.267 17:31:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:12.267 17:31:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:12.267 17:31:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:12.267 17:31:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:12.267 17:31:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:12.267 17:31:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.267 17:31:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.267 17:31:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.267 17:31:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.267 17:31:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.267 17:31:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.267 17:31:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.267 17:31:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.267 17:31:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.267 17:31:45 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.267 "name": "raid_bdev1", 00:15:12.267 "uuid": "e4f36c90-4bf3-4a7d-9a22-9c79b7389510", 00:15:12.267 "strip_size_kb": 64, 00:15:12.267 "state": "online", 00:15:12.267 "raid_level": "raid5f", 00:15:12.267 "superblock": true, 00:15:12.267 "num_base_bdevs": 3, 00:15:12.267 "num_base_bdevs_discovered": 2, 00:15:12.267 "num_base_bdevs_operational": 2, 00:15:12.267 "base_bdevs_list": [ 00:15:12.267 { 00:15:12.267 "name": null, 00:15:12.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.267 "is_configured": false, 00:15:12.267 "data_offset": 2048, 00:15:12.267 "data_size": 63488 00:15:12.267 }, 00:15:12.267 { 00:15:12.267 "name": "pt2", 00:15:12.267 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:12.267 "is_configured": true, 00:15:12.267 "data_offset": 2048, 00:15:12.267 "data_size": 63488 00:15:12.267 }, 00:15:12.267 { 00:15:12.267 "name": "pt3", 00:15:12.267 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:12.267 "is_configured": true, 00:15:12.267 "data_offset": 2048, 00:15:12.267 "data_size": 63488 00:15:12.267 } 00:15:12.267 ] 00:15:12.267 }' 00:15:12.267 17:31:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.267 17:31:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.525 17:31:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:12.525 17:31:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.525 17:31:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.525 [2024-12-07 17:31:45.884862] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:12.525 [2024-12-07 17:31:45.884892] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:12.525 [2024-12-07 17:31:45.884959] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:15:12.525 [2024-12-07 17:31:45.885016] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:12.525 [2024-12-07 17:31:45.885025] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:12.525 17:31:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.525 17:31:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.525 17:31:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.525 17:31:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.525 17:31:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:12.525 17:31:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.784 17:31:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:12.784 17:31:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:12.784 17:31:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:15:12.784 17:31:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:15:12.784 17:31:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:15:12.784 17:31:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.784 17:31:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.784 17:31:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.784 17:31:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:12.784 17:31:45 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.784 17:31:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.784 [2024-12-07 17:31:45.960770] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:12.784 [2024-12-07 17:31:45.960822] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:12.784 [2024-12-07 17:31:45.960839] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:12.784 [2024-12-07 17:31:45.960848] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:12.784 [2024-12-07 17:31:45.963327] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:12.784 [2024-12-07 17:31:45.963360] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:12.784 [2024-12-07 17:31:45.963424] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:12.784 [2024-12-07 17:31:45.963496] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:12.784 [2024-12-07 17:31:45.963638] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:12.785 [2024-12-07 17:31:45.963650] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:12.785 [2024-12-07 17:31:45.963665] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:15:12.785 [2024-12-07 17:31:45.963717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:12.785 pt1 00:15:12.785 17:31:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.785 17:31:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:15:12.785 17:31:45 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:12.785 17:31:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:12.785 17:31:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:12.785 17:31:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:12.785 17:31:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:12.785 17:31:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:12.785 17:31:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.785 17:31:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.785 17:31:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.785 17:31:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.785 17:31:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.785 17:31:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.785 17:31:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.785 17:31:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.785 17:31:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.785 17:31:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.785 "name": "raid_bdev1", 00:15:12.785 "uuid": "e4f36c90-4bf3-4a7d-9a22-9c79b7389510", 00:15:12.785 "strip_size_kb": 64, 00:15:12.785 "state": "configuring", 00:15:12.785 "raid_level": "raid5f", 00:15:12.785 
"superblock": true, 00:15:12.785 "num_base_bdevs": 3, 00:15:12.785 "num_base_bdevs_discovered": 1, 00:15:12.785 "num_base_bdevs_operational": 2, 00:15:12.785 "base_bdevs_list": [ 00:15:12.785 { 00:15:12.785 "name": null, 00:15:12.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.785 "is_configured": false, 00:15:12.785 "data_offset": 2048, 00:15:12.785 "data_size": 63488 00:15:12.785 }, 00:15:12.785 { 00:15:12.785 "name": "pt2", 00:15:12.785 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:12.785 "is_configured": true, 00:15:12.785 "data_offset": 2048, 00:15:12.785 "data_size": 63488 00:15:12.785 }, 00:15:12.785 { 00:15:12.785 "name": null, 00:15:12.785 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:12.785 "is_configured": false, 00:15:12.785 "data_offset": 2048, 00:15:12.785 "data_size": 63488 00:15:12.785 } 00:15:12.785 ] 00:15:12.785 }' 00:15:12.785 17:31:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.785 17:31:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.043 17:31:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:15:13.043 17:31:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:13.043 17:31:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.043 17:31:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.043 17:31:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.303 17:31:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:15:13.303 17:31:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:13.303 17:31:46 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.303 17:31:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.303 [2024-12-07 17:31:46.447989] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:13.303 [2024-12-07 17:31:46.448076] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:13.303 [2024-12-07 17:31:46.448110] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:13.303 [2024-12-07 17:31:46.448140] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:13.303 [2024-12-07 17:31:46.448574] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:13.303 [2024-12-07 17:31:46.448639] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:13.303 [2024-12-07 17:31:46.448724] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:13.303 [2024-12-07 17:31:46.448767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:13.303 [2024-12-07 17:31:46.448906] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:15:13.303 [2024-12-07 17:31:46.448954] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:13.303 [2024-12-07 17:31:46.449219] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:13.303 [2024-12-07 17:31:46.454588] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:15:13.303 [2024-12-07 17:31:46.454646] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:15:13.303 [2024-12-07 17:31:46.454901] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:13.303 pt3 00:15:13.303 17:31:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:13.303 17:31:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:13.303 17:31:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:13.303 17:31:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:13.303 17:31:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:13.303 17:31:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:13.303 17:31:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:13.303 17:31:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.303 17:31:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.303 17:31:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.303 17:31:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.303 17:31:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.303 17:31:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.303 17:31:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.303 17:31:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.303 17:31:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.303 17:31:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.303 "name": "raid_bdev1", 00:15:13.303 "uuid": "e4f36c90-4bf3-4a7d-9a22-9c79b7389510", 00:15:13.303 "strip_size_kb": 64, 00:15:13.303 "state": "online", 00:15:13.303 "raid_level": 
"raid5f", 00:15:13.303 "superblock": true, 00:15:13.303 "num_base_bdevs": 3, 00:15:13.303 "num_base_bdevs_discovered": 2, 00:15:13.303 "num_base_bdevs_operational": 2, 00:15:13.303 "base_bdevs_list": [ 00:15:13.303 { 00:15:13.303 "name": null, 00:15:13.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.303 "is_configured": false, 00:15:13.303 "data_offset": 2048, 00:15:13.303 "data_size": 63488 00:15:13.303 }, 00:15:13.303 { 00:15:13.303 "name": "pt2", 00:15:13.303 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:13.303 "is_configured": true, 00:15:13.303 "data_offset": 2048, 00:15:13.303 "data_size": 63488 00:15:13.303 }, 00:15:13.303 { 00:15:13.303 "name": "pt3", 00:15:13.303 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:13.303 "is_configured": true, 00:15:13.303 "data_offset": 2048, 00:15:13.303 "data_size": 63488 00:15:13.303 } 00:15:13.303 ] 00:15:13.303 }' 00:15:13.303 17:31:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.303 17:31:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.564 17:31:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:13.564 17:31:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.564 17:31:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.564 17:31:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:13.564 17:31:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.564 17:31:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:13.564 17:31:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:13.564 17:31:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:13.564 17:31:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.564 17:31:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:13.564 [2024-12-07 17:31:46.933049] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:13.824 17:31:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.824 17:31:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' e4f36c90-4bf3-4a7d-9a22-9c79b7389510 '!=' e4f36c90-4bf3-4a7d-9a22-9c79b7389510 ']' 00:15:13.824 17:31:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81126 00:15:13.824 17:31:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81126 ']' 00:15:13.824 17:31:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81126 00:15:13.824 17:31:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:15:13.824 17:31:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:13.824 17:31:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81126 00:15:13.824 17:31:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:13.824 17:31:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:13.824 17:31:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81126' 00:15:13.824 killing process with pid 81126 00:15:13.824 17:31:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 81126 00:15:13.824 [2024-12-07 17:31:47.016869] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:13.824 [2024-12-07 17:31:47.016955] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:15:13.824 [2024-12-07 17:31:47.017011] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:13.824 [2024-12-07 17:31:47.017023] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:15:13.824 17:31:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 81126 00:15:14.083 [2024-12-07 17:31:47.330718] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:15.468 ************************************ 00:15:15.468 END TEST raid5f_superblock_test 00:15:15.468 ************************************ 00:15:15.468 17:31:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:15.468 00:15:15.468 real 0m7.992s 00:15:15.468 user 0m12.304s 00:15:15.468 sys 0m1.535s 00:15:15.468 17:31:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:15.468 17:31:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.468 17:31:48 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:15:15.468 17:31:48 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:15:15.468 17:31:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:15.468 17:31:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:15.468 17:31:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:15.468 ************************************ 00:15:15.468 START TEST raid5f_rebuild_test 00:15:15.468 ************************************ 00:15:15.468 17:31:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:15:15.468 17:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:15.468 17:31:48 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:15:15.468 17:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:15.468 17:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:15.468 17:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:15.468 17:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:15.468 17:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:15.468 17:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:15.468 17:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:15.468 17:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:15.468 17:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:15.468 17:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:15.468 17:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:15.468 17:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:15.468 17:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:15.468 17:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:15.468 17:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:15.468 17:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:15.468 17:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:15.468 17:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:15.468 17:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 
00:15:15.468 17:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:15.468 17:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:15.468 17:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:15.468 17:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:15.468 17:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:15.468 17:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:15.468 17:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:15.468 17:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81570 00:15:15.468 17:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:15.468 17:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81570 00:15:15.468 17:31:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 81570 ']' 00:15:15.468 17:31:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:15.468 17:31:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:15.468 17:31:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:15.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:15.468 17:31:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:15.468 17:31:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.468 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:15.468 Zero copy mechanism will not be used. 00:15:15.468 [2024-12-07 17:31:48.668908] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:15:15.468 [2024-12-07 17:31:48.669087] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81570 ] 00:15:15.468 [2024-12-07 17:31:48.845044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.728 [2024-12-07 17:31:48.968701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:15.987 [2024-12-07 17:31:49.198363] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:15.987 [2024-12-07 17:31:49.198489] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:16.247 17:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:16.247 17:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:15:16.247 17:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:16.247 17:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:16.247 17:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.247 17:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.247 BaseBdev1_malloc 00:15:16.247 17:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.247 
17:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:16.247 17:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.247 17:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.247 [2024-12-07 17:31:49.532760] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:16.247 [2024-12-07 17:31:49.532837] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:16.247 [2024-12-07 17:31:49.532860] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:16.247 [2024-12-07 17:31:49.532872] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:16.247 [2024-12-07 17:31:49.535187] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:16.247 [2024-12-07 17:31:49.535302] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:16.247 BaseBdev1 00:15:16.247 17:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.247 17:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:16.247 17:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:16.247 17:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.247 17:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.247 BaseBdev2_malloc 00:15:16.247 17:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.247 17:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:16.247 17:31:49 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.247 17:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.247 [2024-12-07 17:31:49.592862] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:16.247 [2024-12-07 17:31:49.593013] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:16.247 [2024-12-07 17:31:49.593041] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:16.247 [2024-12-07 17:31:49.593053] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:16.247 [2024-12-07 17:31:49.595315] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:16.247 [2024-12-07 17:31:49.595353] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:16.247 BaseBdev2 00:15:16.247 17:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.247 17:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:16.247 17:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:16.247 17:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.247 17:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.507 BaseBdev3_malloc 00:15:16.507 17:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.507 17:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:16.507 17:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.507 17:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.507 [2024-12-07 17:31:49.686102] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:16.507 [2024-12-07 17:31:49.686154] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:16.507 [2024-12-07 17:31:49.686175] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:16.507 [2024-12-07 17:31:49.686186] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:16.507 [2024-12-07 17:31:49.688520] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:16.507 [2024-12-07 17:31:49.688593] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:16.507 BaseBdev3 00:15:16.507 17:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.507 17:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:16.507 17:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.507 17:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.507 spare_malloc 00:15:16.507 17:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.507 17:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:16.507 17:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.507 17:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.507 spare_delay 00:15:16.507 17:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.507 17:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:16.507 17:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:16.507 17:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.507 [2024-12-07 17:31:49.760724] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:16.507 [2024-12-07 17:31:49.760778] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:16.507 [2024-12-07 17:31:49.760794] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:16.507 [2024-12-07 17:31:49.760805] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:16.507 [2024-12-07 17:31:49.763084] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:16.507 [2024-12-07 17:31:49.763120] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:16.507 spare 00:15:16.507 17:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.507 17:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:16.507 17:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.507 17:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.507 [2024-12-07 17:31:49.772772] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:16.507 [2024-12-07 17:31:49.774760] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:16.507 [2024-12-07 17:31:49.774821] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:16.507 [2024-12-07 17:31:49.774902] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:16.507 [2024-12-07 17:31:49.774912] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:16.507 [2024-12-07 
17:31:49.775162] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:16.507 [2024-12-07 17:31:49.780665] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:16.507 [2024-12-07 17:31:49.780687] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:16.507 [2024-12-07 17:31:49.780859] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:16.507 17:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.507 17:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:16.507 17:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:16.507 17:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:16.507 17:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:16.507 17:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:16.507 17:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:16.507 17:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.507 17:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.507 17:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.507 17:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.507 17:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.507 17:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.507 17:31:49 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.507 17:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.507 17:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.507 17:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:16.507 "name": "raid_bdev1", 00:15:16.507 "uuid": "ac7aee7e-0cfa-421f-a2c9-8a3754e463f1", 00:15:16.507 "strip_size_kb": 64, 00:15:16.507 "state": "online", 00:15:16.507 "raid_level": "raid5f", 00:15:16.507 "superblock": false, 00:15:16.507 "num_base_bdevs": 3, 00:15:16.507 "num_base_bdevs_discovered": 3, 00:15:16.507 "num_base_bdevs_operational": 3, 00:15:16.507 "base_bdevs_list": [ 00:15:16.507 { 00:15:16.507 "name": "BaseBdev1", 00:15:16.507 "uuid": "b0a3858d-04c2-5c60-97e0-e0bc0837698f", 00:15:16.507 "is_configured": true, 00:15:16.507 "data_offset": 0, 00:15:16.507 "data_size": 65536 00:15:16.507 }, 00:15:16.507 { 00:15:16.507 "name": "BaseBdev2", 00:15:16.507 "uuid": "c7827071-20dc-5573-9ad9-87450226e046", 00:15:16.507 "is_configured": true, 00:15:16.507 "data_offset": 0, 00:15:16.507 "data_size": 65536 00:15:16.507 }, 00:15:16.507 { 00:15:16.507 "name": "BaseBdev3", 00:15:16.507 "uuid": "5d0e2b24-6a14-5717-a9df-6aa74d585f6c", 00:15:16.507 "is_configured": true, 00:15:16.507 "data_offset": 0, 00:15:16.507 "data_size": 65536 00:15:16.507 } 00:15:16.507 ] 00:15:16.507 }' 00:15:16.507 17:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:16.507 17:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.075 17:31:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:17.075 17:31:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:17.075 17:31:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.075 17:31:50 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.075 [2024-12-07 17:31:50.242886] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:17.075 17:31:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.075 17:31:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:15:17.075 17:31:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.075 17:31:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.075 17:31:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.075 17:31:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:17.075 17:31:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.075 17:31:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:17.075 17:31:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:17.075 17:31:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:17.075 17:31:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:17.075 17:31:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:17.075 17:31:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:17.075 17:31:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:17.075 17:31:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:17.075 17:31:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:17.075 17:31:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # 
local nbd_list 00:15:17.075 17:31:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:17.075 17:31:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:17.075 17:31:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:17.075 17:31:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:17.335 [2024-12-07 17:31:50.494301] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:17.335 /dev/nbd0 00:15:17.335 17:31:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:17.335 17:31:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:17.335 17:31:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:17.335 17:31:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:17.335 17:31:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:17.335 17:31:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:17.335 17:31:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:17.335 17:31:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:17.335 17:31:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:17.335 17:31:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:17.335 17:31:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:17.335 1+0 records in 00:15:17.335 1+0 records out 00:15:17.335 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000371262 s, 11.0 MB/s 00:15:17.335 
17:31:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:17.335 17:31:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:17.335 17:31:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:17.335 17:31:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:17.335 17:31:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:17.335 17:31:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:17.335 17:31:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:17.335 17:31:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:17.335 17:31:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:15:17.335 17:31:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:15:17.335 17:31:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:15:17.595 512+0 records in 00:15:17.595 512+0 records out 00:15:17.595 67108864 bytes (67 MB, 64 MiB) copied, 0.382028 s, 176 MB/s 00:15:17.595 17:31:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:17.595 17:31:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:17.595 17:31:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:17.595 17:31:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:17.595 17:31:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:17.595 17:31:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
00:15:17.595 17:31:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:17.855 [2024-12-07 17:31:51.163127] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:17.855 17:31:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:17.855 17:31:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:17.855 17:31:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:17.855 17:31:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:17.855 17:31:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:17.855 17:31:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:17.855 17:31:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:17.855 17:31:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:17.855 17:31:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:17.855 17:31:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.855 17:31:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.855 [2024-12-07 17:31:51.195629] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:17.855 17:31:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.855 17:31:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:17.855 17:31:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:17.855 17:31:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:17.855 17:31:51 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:17.855 17:31:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:17.855 17:31:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:17.855 17:31:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.855 17:31:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.855 17:31:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.855 17:31:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.855 17:31:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.855 17:31:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.855 17:31:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.855 17:31:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.855 17:31:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.114 17:31:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.114 "name": "raid_bdev1", 00:15:18.114 "uuid": "ac7aee7e-0cfa-421f-a2c9-8a3754e463f1", 00:15:18.114 "strip_size_kb": 64, 00:15:18.114 "state": "online", 00:15:18.114 "raid_level": "raid5f", 00:15:18.114 "superblock": false, 00:15:18.114 "num_base_bdevs": 3, 00:15:18.114 "num_base_bdevs_discovered": 2, 00:15:18.114 "num_base_bdevs_operational": 2, 00:15:18.114 "base_bdevs_list": [ 00:15:18.114 { 00:15:18.114 "name": null, 00:15:18.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.115 "is_configured": false, 00:15:18.115 "data_offset": 0, 00:15:18.115 "data_size": 65536 00:15:18.115 }, 00:15:18.115 { 00:15:18.115 
"name": "BaseBdev2", 00:15:18.115 "uuid": "c7827071-20dc-5573-9ad9-87450226e046", 00:15:18.115 "is_configured": true, 00:15:18.115 "data_offset": 0, 00:15:18.115 "data_size": 65536 00:15:18.115 }, 00:15:18.115 { 00:15:18.115 "name": "BaseBdev3", 00:15:18.115 "uuid": "5d0e2b24-6a14-5717-a9df-6aa74d585f6c", 00:15:18.115 "is_configured": true, 00:15:18.115 "data_offset": 0, 00:15:18.115 "data_size": 65536 00:15:18.115 } 00:15:18.115 ] 00:15:18.115 }' 00:15:18.115 17:31:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.115 17:31:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.374 17:31:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:18.374 17:31:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.374 17:31:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.374 [2024-12-07 17:31:51.591031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:18.374 [2024-12-07 17:31:51.609253] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:15:18.374 17:31:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.374 17:31:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:18.374 [2024-12-07 17:31:51.617714] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:19.314 17:31:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:19.314 17:31:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:19.314 17:31:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:19.314 17:31:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:15:19.314 17:31:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:19.314 17:31:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.314 17:31:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.314 17:31:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.314 17:31:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.314 17:31:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.314 17:31:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:19.314 "name": "raid_bdev1", 00:15:19.314 "uuid": "ac7aee7e-0cfa-421f-a2c9-8a3754e463f1", 00:15:19.314 "strip_size_kb": 64, 00:15:19.314 "state": "online", 00:15:19.314 "raid_level": "raid5f", 00:15:19.314 "superblock": false, 00:15:19.314 "num_base_bdevs": 3, 00:15:19.314 "num_base_bdevs_discovered": 3, 00:15:19.314 "num_base_bdevs_operational": 3, 00:15:19.314 "process": { 00:15:19.314 "type": "rebuild", 00:15:19.314 "target": "spare", 00:15:19.314 "progress": { 00:15:19.314 "blocks": 20480, 00:15:19.314 "percent": 15 00:15:19.314 } 00:15:19.314 }, 00:15:19.314 "base_bdevs_list": [ 00:15:19.314 { 00:15:19.314 "name": "spare", 00:15:19.314 "uuid": "22073c9b-bec9-5c3f-a05a-1f0627af0655", 00:15:19.314 "is_configured": true, 00:15:19.314 "data_offset": 0, 00:15:19.314 "data_size": 65536 00:15:19.314 }, 00:15:19.314 { 00:15:19.314 "name": "BaseBdev2", 00:15:19.314 "uuid": "c7827071-20dc-5573-9ad9-87450226e046", 00:15:19.314 "is_configured": true, 00:15:19.314 "data_offset": 0, 00:15:19.314 "data_size": 65536 00:15:19.314 }, 00:15:19.314 { 00:15:19.314 "name": "BaseBdev3", 00:15:19.314 "uuid": "5d0e2b24-6a14-5717-a9df-6aa74d585f6c", 00:15:19.314 "is_configured": true, 00:15:19.314 "data_offset": 0, 00:15:19.314 
"data_size": 65536 00:15:19.314 } 00:15:19.314 ] 00:15:19.314 }' 00:15:19.314 17:31:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:19.573 17:31:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:19.573 17:31:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:19.573 17:31:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:19.573 17:31:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:19.573 17:31:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.573 17:31:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.573 [2024-12-07 17:31:52.772533] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:19.573 [2024-12-07 17:31:52.826292] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:19.573 [2024-12-07 17:31:52.826349] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:19.573 [2024-12-07 17:31:52.826367] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:19.573 [2024-12-07 17:31:52.826374] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:19.573 17:31:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.573 17:31:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:19.573 17:31:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:19.573 17:31:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:19.573 17:31:52 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:19.573 17:31:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:19.573 17:31:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:19.573 17:31:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.573 17:31:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.573 17:31:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.573 17:31:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.573 17:31:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.573 17:31:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.573 17:31:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.573 17:31:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.573 17:31:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.573 17:31:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.573 "name": "raid_bdev1", 00:15:19.573 "uuid": "ac7aee7e-0cfa-421f-a2c9-8a3754e463f1", 00:15:19.573 "strip_size_kb": 64, 00:15:19.573 "state": "online", 00:15:19.573 "raid_level": "raid5f", 00:15:19.573 "superblock": false, 00:15:19.573 "num_base_bdevs": 3, 00:15:19.574 "num_base_bdevs_discovered": 2, 00:15:19.574 "num_base_bdevs_operational": 2, 00:15:19.574 "base_bdevs_list": [ 00:15:19.574 { 00:15:19.574 "name": null, 00:15:19.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.574 "is_configured": false, 00:15:19.574 "data_offset": 0, 00:15:19.574 "data_size": 65536 00:15:19.574 }, 00:15:19.574 { 00:15:19.574 "name": "BaseBdev2", 00:15:19.574 
"uuid": "c7827071-20dc-5573-9ad9-87450226e046", 00:15:19.574 "is_configured": true, 00:15:19.574 "data_offset": 0, 00:15:19.574 "data_size": 65536 00:15:19.574 }, 00:15:19.574 { 00:15:19.574 "name": "BaseBdev3", 00:15:19.574 "uuid": "5d0e2b24-6a14-5717-a9df-6aa74d585f6c", 00:15:19.574 "is_configured": true, 00:15:19.574 "data_offset": 0, 00:15:19.574 "data_size": 65536 00:15:19.574 } 00:15:19.574 ] 00:15:19.574 }' 00:15:19.574 17:31:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.574 17:31:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.150 17:31:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:20.150 17:31:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:20.150 17:31:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:20.150 17:31:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:20.150 17:31:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:20.150 17:31:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.150 17:31:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.150 17:31:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.150 17:31:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.150 17:31:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.150 17:31:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:20.150 "name": "raid_bdev1", 00:15:20.150 "uuid": "ac7aee7e-0cfa-421f-a2c9-8a3754e463f1", 00:15:20.150 "strip_size_kb": 64, 00:15:20.150 "state": "online", 00:15:20.150 "raid_level": 
"raid5f", 00:15:20.150 "superblock": false, 00:15:20.150 "num_base_bdevs": 3, 00:15:20.150 "num_base_bdevs_discovered": 2, 00:15:20.150 "num_base_bdevs_operational": 2, 00:15:20.150 "base_bdevs_list": [ 00:15:20.150 { 00:15:20.150 "name": null, 00:15:20.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.150 "is_configured": false, 00:15:20.150 "data_offset": 0, 00:15:20.150 "data_size": 65536 00:15:20.150 }, 00:15:20.150 { 00:15:20.150 "name": "BaseBdev2", 00:15:20.150 "uuid": "c7827071-20dc-5573-9ad9-87450226e046", 00:15:20.150 "is_configured": true, 00:15:20.150 "data_offset": 0, 00:15:20.150 "data_size": 65536 00:15:20.150 }, 00:15:20.150 { 00:15:20.150 "name": "BaseBdev3", 00:15:20.150 "uuid": "5d0e2b24-6a14-5717-a9df-6aa74d585f6c", 00:15:20.150 "is_configured": true, 00:15:20.150 "data_offset": 0, 00:15:20.150 "data_size": 65536 00:15:20.150 } 00:15:20.150 ] 00:15:20.150 }' 00:15:20.150 17:31:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:20.150 17:31:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:20.150 17:31:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:20.150 17:31:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:20.150 17:31:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:20.150 17:31:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.150 17:31:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.150 [2024-12-07 17:31:53.431405] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:20.150 [2024-12-07 17:31:53.446953] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:15:20.150 17:31:53 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.150 17:31:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:20.150 [2024-12-07 17:31:53.454301] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:21.088 17:31:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:21.088 17:31:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:21.088 17:31:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:21.088 17:31:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:21.088 17:31:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:21.088 17:31:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.088 17:31:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.088 17:31:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.088 17:31:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.348 17:31:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.348 17:31:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:21.348 "name": "raid_bdev1", 00:15:21.348 "uuid": "ac7aee7e-0cfa-421f-a2c9-8a3754e463f1", 00:15:21.348 "strip_size_kb": 64, 00:15:21.348 "state": "online", 00:15:21.348 "raid_level": "raid5f", 00:15:21.348 "superblock": false, 00:15:21.348 "num_base_bdevs": 3, 00:15:21.348 "num_base_bdevs_discovered": 3, 00:15:21.348 "num_base_bdevs_operational": 3, 00:15:21.348 "process": { 00:15:21.348 "type": "rebuild", 00:15:21.348 "target": "spare", 00:15:21.348 "progress": { 00:15:21.348 "blocks": 20480, 00:15:21.348 
"percent": 15 00:15:21.348 } 00:15:21.348 }, 00:15:21.348 "base_bdevs_list": [ 00:15:21.348 { 00:15:21.348 "name": "spare", 00:15:21.348 "uuid": "22073c9b-bec9-5c3f-a05a-1f0627af0655", 00:15:21.348 "is_configured": true, 00:15:21.348 "data_offset": 0, 00:15:21.348 "data_size": 65536 00:15:21.348 }, 00:15:21.348 { 00:15:21.348 "name": "BaseBdev2", 00:15:21.348 "uuid": "c7827071-20dc-5573-9ad9-87450226e046", 00:15:21.348 "is_configured": true, 00:15:21.348 "data_offset": 0, 00:15:21.348 "data_size": 65536 00:15:21.348 }, 00:15:21.348 { 00:15:21.348 "name": "BaseBdev3", 00:15:21.348 "uuid": "5d0e2b24-6a14-5717-a9df-6aa74d585f6c", 00:15:21.348 "is_configured": true, 00:15:21.348 "data_offset": 0, 00:15:21.348 "data_size": 65536 00:15:21.348 } 00:15:21.348 ] 00:15:21.348 }' 00:15:21.348 17:31:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:21.348 17:31:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:21.348 17:31:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:21.348 17:31:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:21.348 17:31:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:21.348 17:31:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:15:21.348 17:31:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:21.348 17:31:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=545 00:15:21.348 17:31:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:21.348 17:31:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:21.348 17:31:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:15:21.348 17:31:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:21.348 17:31:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:21.348 17:31:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:21.348 17:31:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.348 17:31:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.348 17:31:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.348 17:31:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.348 17:31:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.348 17:31:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:21.348 "name": "raid_bdev1", 00:15:21.348 "uuid": "ac7aee7e-0cfa-421f-a2c9-8a3754e463f1", 00:15:21.348 "strip_size_kb": 64, 00:15:21.348 "state": "online", 00:15:21.348 "raid_level": "raid5f", 00:15:21.348 "superblock": false, 00:15:21.348 "num_base_bdevs": 3, 00:15:21.348 "num_base_bdevs_discovered": 3, 00:15:21.348 "num_base_bdevs_operational": 3, 00:15:21.348 "process": { 00:15:21.348 "type": "rebuild", 00:15:21.348 "target": "spare", 00:15:21.348 "progress": { 00:15:21.348 "blocks": 22528, 00:15:21.348 "percent": 17 00:15:21.348 } 00:15:21.348 }, 00:15:21.348 "base_bdevs_list": [ 00:15:21.348 { 00:15:21.348 "name": "spare", 00:15:21.348 "uuid": "22073c9b-bec9-5c3f-a05a-1f0627af0655", 00:15:21.348 "is_configured": true, 00:15:21.348 "data_offset": 0, 00:15:21.348 "data_size": 65536 00:15:21.348 }, 00:15:21.348 { 00:15:21.348 "name": "BaseBdev2", 00:15:21.348 "uuid": "c7827071-20dc-5573-9ad9-87450226e046", 00:15:21.348 "is_configured": true, 00:15:21.348 "data_offset": 0, 00:15:21.348 
"data_size": 65536 00:15:21.348 }, 00:15:21.348 { 00:15:21.348 "name": "BaseBdev3", 00:15:21.348 "uuid": "5d0e2b24-6a14-5717-a9df-6aa74d585f6c", 00:15:21.348 "is_configured": true, 00:15:21.348 "data_offset": 0, 00:15:21.348 "data_size": 65536 00:15:21.348 } 00:15:21.348 ] 00:15:21.348 }' 00:15:21.348 17:31:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:21.348 17:31:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:21.348 17:31:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:21.608 17:31:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:21.608 17:31:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:22.548 17:31:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:22.548 17:31:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:22.548 17:31:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:22.548 17:31:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:22.548 17:31:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:22.548 17:31:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:22.548 17:31:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.548 17:31:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.548 17:31:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.548 17:31:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.548 17:31:55 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.548 17:31:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:22.548 "name": "raid_bdev1", 00:15:22.548 "uuid": "ac7aee7e-0cfa-421f-a2c9-8a3754e463f1", 00:15:22.548 "strip_size_kb": 64, 00:15:22.548 "state": "online", 00:15:22.548 "raid_level": "raid5f", 00:15:22.548 "superblock": false, 00:15:22.548 "num_base_bdevs": 3, 00:15:22.548 "num_base_bdevs_discovered": 3, 00:15:22.548 "num_base_bdevs_operational": 3, 00:15:22.548 "process": { 00:15:22.548 "type": "rebuild", 00:15:22.548 "target": "spare", 00:15:22.548 "progress": { 00:15:22.548 "blocks": 47104, 00:15:22.548 "percent": 35 00:15:22.548 } 00:15:22.548 }, 00:15:22.548 "base_bdevs_list": [ 00:15:22.548 { 00:15:22.548 "name": "spare", 00:15:22.548 "uuid": "22073c9b-bec9-5c3f-a05a-1f0627af0655", 00:15:22.548 "is_configured": true, 00:15:22.548 "data_offset": 0, 00:15:22.548 "data_size": 65536 00:15:22.548 }, 00:15:22.548 { 00:15:22.548 "name": "BaseBdev2", 00:15:22.548 "uuid": "c7827071-20dc-5573-9ad9-87450226e046", 00:15:22.548 "is_configured": true, 00:15:22.548 "data_offset": 0, 00:15:22.548 "data_size": 65536 00:15:22.548 }, 00:15:22.548 { 00:15:22.548 "name": "BaseBdev3", 00:15:22.548 "uuid": "5d0e2b24-6a14-5717-a9df-6aa74d585f6c", 00:15:22.548 "is_configured": true, 00:15:22.548 "data_offset": 0, 00:15:22.548 "data_size": 65536 00:15:22.548 } 00:15:22.548 ] 00:15:22.548 }' 00:15:22.548 17:31:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:22.548 17:31:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:22.548 17:31:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:22.548 17:31:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:22.548 17:31:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 
00:15:23.930 17:31:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:23.930 17:31:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:23.930 17:31:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:23.930 17:31:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:23.930 17:31:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:23.930 17:31:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:23.930 17:31:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.930 17:31:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.930 17:31:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.930 17:31:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.930 17:31:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.930 17:31:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:23.930 "name": "raid_bdev1", 00:15:23.930 "uuid": "ac7aee7e-0cfa-421f-a2c9-8a3754e463f1", 00:15:23.930 "strip_size_kb": 64, 00:15:23.930 "state": "online", 00:15:23.930 "raid_level": "raid5f", 00:15:23.930 "superblock": false, 00:15:23.930 "num_base_bdevs": 3, 00:15:23.930 "num_base_bdevs_discovered": 3, 00:15:23.930 "num_base_bdevs_operational": 3, 00:15:23.930 "process": { 00:15:23.930 "type": "rebuild", 00:15:23.930 "target": "spare", 00:15:23.930 "progress": { 00:15:23.930 "blocks": 69632, 00:15:23.930 "percent": 53 00:15:23.930 } 00:15:23.930 }, 00:15:23.930 "base_bdevs_list": [ 00:15:23.930 { 00:15:23.930 "name": "spare", 00:15:23.930 "uuid": 
"22073c9b-bec9-5c3f-a05a-1f0627af0655", 00:15:23.930 "is_configured": true, 00:15:23.930 "data_offset": 0, 00:15:23.930 "data_size": 65536 00:15:23.930 }, 00:15:23.930 { 00:15:23.930 "name": "BaseBdev2", 00:15:23.930 "uuid": "c7827071-20dc-5573-9ad9-87450226e046", 00:15:23.930 "is_configured": true, 00:15:23.930 "data_offset": 0, 00:15:23.930 "data_size": 65536 00:15:23.930 }, 00:15:23.930 { 00:15:23.930 "name": "BaseBdev3", 00:15:23.930 "uuid": "5d0e2b24-6a14-5717-a9df-6aa74d585f6c", 00:15:23.930 "is_configured": true, 00:15:23.930 "data_offset": 0, 00:15:23.930 "data_size": 65536 00:15:23.930 } 00:15:23.930 ] 00:15:23.930 }' 00:15:23.930 17:31:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:23.930 17:31:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:23.930 17:31:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:23.930 17:31:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:23.930 17:31:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:24.870 17:31:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:24.870 17:31:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:24.870 17:31:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:24.870 17:31:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:24.870 17:31:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:24.870 17:31:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:24.870 17:31:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.870 17:31:58 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.870 17:31:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.871 17:31:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.871 17:31:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.871 17:31:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:24.871 "name": "raid_bdev1", 00:15:24.871 "uuid": "ac7aee7e-0cfa-421f-a2c9-8a3754e463f1", 00:15:24.871 "strip_size_kb": 64, 00:15:24.871 "state": "online", 00:15:24.871 "raid_level": "raid5f", 00:15:24.871 "superblock": false, 00:15:24.871 "num_base_bdevs": 3, 00:15:24.871 "num_base_bdevs_discovered": 3, 00:15:24.871 "num_base_bdevs_operational": 3, 00:15:24.871 "process": { 00:15:24.871 "type": "rebuild", 00:15:24.871 "target": "spare", 00:15:24.871 "progress": { 00:15:24.871 "blocks": 92160, 00:15:24.871 "percent": 70 00:15:24.871 } 00:15:24.871 }, 00:15:24.871 "base_bdevs_list": [ 00:15:24.871 { 00:15:24.871 "name": "spare", 00:15:24.871 "uuid": "22073c9b-bec9-5c3f-a05a-1f0627af0655", 00:15:24.871 "is_configured": true, 00:15:24.871 "data_offset": 0, 00:15:24.871 "data_size": 65536 00:15:24.871 }, 00:15:24.871 { 00:15:24.871 "name": "BaseBdev2", 00:15:24.871 "uuid": "c7827071-20dc-5573-9ad9-87450226e046", 00:15:24.871 "is_configured": true, 00:15:24.871 "data_offset": 0, 00:15:24.871 "data_size": 65536 00:15:24.871 }, 00:15:24.871 { 00:15:24.871 "name": "BaseBdev3", 00:15:24.871 "uuid": "5d0e2b24-6a14-5717-a9df-6aa74d585f6c", 00:15:24.871 "is_configured": true, 00:15:24.871 "data_offset": 0, 00:15:24.871 "data_size": 65536 00:15:24.871 } 00:15:24.871 ] 00:15:24.871 }' 00:15:24.871 17:31:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:24.871 17:31:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:24.871 17:31:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:24.871 17:31:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:24.871 17:31:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:25.810 17:31:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:25.810 17:31:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:25.810 17:31:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:25.810 17:31:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:25.810 17:31:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:25.810 17:31:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:25.810 17:31:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.811 17:31:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.811 17:31:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.811 17:31:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.070 17:31:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.070 17:31:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:26.070 "name": "raid_bdev1", 00:15:26.070 "uuid": "ac7aee7e-0cfa-421f-a2c9-8a3754e463f1", 00:15:26.071 "strip_size_kb": 64, 00:15:26.071 "state": "online", 00:15:26.071 "raid_level": "raid5f", 00:15:26.071 "superblock": false, 00:15:26.071 "num_base_bdevs": 3, 00:15:26.071 "num_base_bdevs_discovered": 3, 00:15:26.071 
"num_base_bdevs_operational": 3, 00:15:26.071 "process": { 00:15:26.071 "type": "rebuild", 00:15:26.071 "target": "spare", 00:15:26.071 "progress": { 00:15:26.071 "blocks": 114688, 00:15:26.071 "percent": 87 00:15:26.071 } 00:15:26.071 }, 00:15:26.071 "base_bdevs_list": [ 00:15:26.071 { 00:15:26.071 "name": "spare", 00:15:26.071 "uuid": "22073c9b-bec9-5c3f-a05a-1f0627af0655", 00:15:26.071 "is_configured": true, 00:15:26.071 "data_offset": 0, 00:15:26.071 "data_size": 65536 00:15:26.071 }, 00:15:26.071 { 00:15:26.071 "name": "BaseBdev2", 00:15:26.071 "uuid": "c7827071-20dc-5573-9ad9-87450226e046", 00:15:26.071 "is_configured": true, 00:15:26.071 "data_offset": 0, 00:15:26.071 "data_size": 65536 00:15:26.071 }, 00:15:26.071 { 00:15:26.071 "name": "BaseBdev3", 00:15:26.071 "uuid": "5d0e2b24-6a14-5717-a9df-6aa74d585f6c", 00:15:26.071 "is_configured": true, 00:15:26.071 "data_offset": 0, 00:15:26.071 "data_size": 65536 00:15:26.071 } 00:15:26.071 ] 00:15:26.071 }' 00:15:26.071 17:31:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:26.071 17:31:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:26.071 17:31:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:26.071 17:31:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:26.071 17:31:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:26.644 [2024-12-07 17:31:59.896759] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:26.644 [2024-12-07 17:31:59.896947] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:26.644 [2024-12-07 17:31:59.897017] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:27.212 17:32:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:15:27.212 17:32:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:27.212 17:32:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:27.212 17:32:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:27.212 17:32:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:27.213 17:32:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:27.213 17:32:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.213 17:32:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.213 17:32:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.213 17:32:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.213 17:32:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.213 17:32:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:27.213 "name": "raid_bdev1", 00:15:27.213 "uuid": "ac7aee7e-0cfa-421f-a2c9-8a3754e463f1", 00:15:27.213 "strip_size_kb": 64, 00:15:27.213 "state": "online", 00:15:27.213 "raid_level": "raid5f", 00:15:27.213 "superblock": false, 00:15:27.213 "num_base_bdevs": 3, 00:15:27.213 "num_base_bdevs_discovered": 3, 00:15:27.213 "num_base_bdevs_operational": 3, 00:15:27.213 "base_bdevs_list": [ 00:15:27.213 { 00:15:27.213 "name": "spare", 00:15:27.213 "uuid": "22073c9b-bec9-5c3f-a05a-1f0627af0655", 00:15:27.213 "is_configured": true, 00:15:27.213 "data_offset": 0, 00:15:27.213 "data_size": 65536 00:15:27.213 }, 00:15:27.213 { 00:15:27.213 "name": "BaseBdev2", 00:15:27.213 "uuid": "c7827071-20dc-5573-9ad9-87450226e046", 00:15:27.213 "is_configured": true, 00:15:27.213 
"data_offset": 0, 00:15:27.213 "data_size": 65536 00:15:27.213 }, 00:15:27.213 { 00:15:27.213 "name": "BaseBdev3", 00:15:27.213 "uuid": "5d0e2b24-6a14-5717-a9df-6aa74d585f6c", 00:15:27.213 "is_configured": true, 00:15:27.213 "data_offset": 0, 00:15:27.213 "data_size": 65536 00:15:27.213 } 00:15:27.213 ] 00:15:27.213 }' 00:15:27.213 17:32:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:27.213 17:32:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:27.213 17:32:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:27.213 17:32:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:27.213 17:32:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:27.213 17:32:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:27.213 17:32:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:27.213 17:32:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:27.213 17:32:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:27.213 17:32:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:27.213 17:32:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.213 17:32:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.213 17:32:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.213 17:32:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.213 17:32:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.213 17:32:00 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:27.213 "name": "raid_bdev1", 00:15:27.213 "uuid": "ac7aee7e-0cfa-421f-a2c9-8a3754e463f1", 00:15:27.213 "strip_size_kb": 64, 00:15:27.213 "state": "online", 00:15:27.213 "raid_level": "raid5f", 00:15:27.213 "superblock": false, 00:15:27.213 "num_base_bdevs": 3, 00:15:27.213 "num_base_bdevs_discovered": 3, 00:15:27.213 "num_base_bdevs_operational": 3, 00:15:27.213 "base_bdevs_list": [ 00:15:27.213 { 00:15:27.213 "name": "spare", 00:15:27.213 "uuid": "22073c9b-bec9-5c3f-a05a-1f0627af0655", 00:15:27.213 "is_configured": true, 00:15:27.213 "data_offset": 0, 00:15:27.213 "data_size": 65536 00:15:27.213 }, 00:15:27.213 { 00:15:27.213 "name": "BaseBdev2", 00:15:27.213 "uuid": "c7827071-20dc-5573-9ad9-87450226e046", 00:15:27.213 "is_configured": true, 00:15:27.213 "data_offset": 0, 00:15:27.213 "data_size": 65536 00:15:27.213 }, 00:15:27.213 { 00:15:27.213 "name": "BaseBdev3", 00:15:27.213 "uuid": "5d0e2b24-6a14-5717-a9df-6aa74d585f6c", 00:15:27.213 "is_configured": true, 00:15:27.213 "data_offset": 0, 00:15:27.213 "data_size": 65536 00:15:27.213 } 00:15:27.213 ] 00:15:27.213 }' 00:15:27.213 17:32:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:27.213 17:32:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:27.213 17:32:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:27.213 17:32:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:27.213 17:32:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:27.213 17:32:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:27.213 17:32:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:27.213 17:32:00 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:27.213 17:32:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:27.213 17:32:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:27.213 17:32:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.213 17:32:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.213 17:32:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.213 17:32:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.213 17:32:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.213 17:32:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.213 17:32:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.213 17:32:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.213 17:32:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.472 17:32:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.472 "name": "raid_bdev1", 00:15:27.472 "uuid": "ac7aee7e-0cfa-421f-a2c9-8a3754e463f1", 00:15:27.472 "strip_size_kb": 64, 00:15:27.472 "state": "online", 00:15:27.472 "raid_level": "raid5f", 00:15:27.472 "superblock": false, 00:15:27.472 "num_base_bdevs": 3, 00:15:27.472 "num_base_bdevs_discovered": 3, 00:15:27.472 "num_base_bdevs_operational": 3, 00:15:27.472 "base_bdevs_list": [ 00:15:27.472 { 00:15:27.472 "name": "spare", 00:15:27.472 "uuid": "22073c9b-bec9-5c3f-a05a-1f0627af0655", 00:15:27.472 "is_configured": true, 00:15:27.472 "data_offset": 0, 00:15:27.472 "data_size": 65536 00:15:27.472 }, 00:15:27.472 { 00:15:27.472 
"name": "BaseBdev2", 00:15:27.472 "uuid": "c7827071-20dc-5573-9ad9-87450226e046", 00:15:27.472 "is_configured": true, 00:15:27.472 "data_offset": 0, 00:15:27.472 "data_size": 65536 00:15:27.472 }, 00:15:27.472 { 00:15:27.472 "name": "BaseBdev3", 00:15:27.472 "uuid": "5d0e2b24-6a14-5717-a9df-6aa74d585f6c", 00:15:27.472 "is_configured": true, 00:15:27.472 "data_offset": 0, 00:15:27.472 "data_size": 65536 00:15:27.472 } 00:15:27.472 ] 00:15:27.472 }' 00:15:27.472 17:32:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.472 17:32:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.732 17:32:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:27.732 17:32:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.732 17:32:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.732 [2024-12-07 17:32:01.000579] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:27.732 [2024-12-07 17:32:01.000656] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:27.732 [2024-12-07 17:32:01.000766] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:27.732 [2024-12-07 17:32:01.000865] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:27.732 [2024-12-07 17:32:01.000917] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:27.732 17:32:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.732 17:32:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.732 17:32:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:15:27.732 17:32:01 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.732 17:32:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.732 17:32:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.732 17:32:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:27.732 17:32:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:27.732 17:32:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:27.732 17:32:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:27.732 17:32:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:27.732 17:32:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:27.732 17:32:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:27.732 17:32:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:27.732 17:32:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:27.732 17:32:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:27.732 17:32:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:27.732 17:32:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:27.732 17:32:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:28.029 /dev/nbd0 00:15:28.029 17:32:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:28.029 17:32:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:28.029 17:32:01 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:28.029 17:32:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:28.030 17:32:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:28.030 17:32:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:28.030 17:32:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:28.030 17:32:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:28.030 17:32:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:28.030 17:32:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:28.030 17:32:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:28.030 1+0 records in 00:15:28.030 1+0 records out 00:15:28.030 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000430706 s, 9.5 MB/s 00:15:28.030 17:32:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:28.030 17:32:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:28.030 17:32:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:28.030 17:32:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:28.030 17:32:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:28.030 17:32:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:28.030 17:32:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:28.030 17:32:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:28.299 /dev/nbd1 00:15:28.299 17:32:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:28.299 17:32:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:28.299 17:32:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:28.299 17:32:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:28.299 17:32:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:28.299 17:32:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:28.299 17:32:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:28.299 17:32:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:28.299 17:32:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:28.299 17:32:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:28.299 17:32:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:28.299 1+0 records in 00:15:28.299 1+0 records out 00:15:28.299 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000371082 s, 11.0 MB/s 00:15:28.299 17:32:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:28.299 17:32:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:28.299 17:32:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:28.299 17:32:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:28.299 17:32:01 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:28.299 17:32:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:28.299 17:32:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:28.299 17:32:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:28.559 17:32:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:28.559 17:32:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:28.559 17:32:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:28.559 17:32:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:28.559 17:32:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:28.559 17:32:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:28.559 17:32:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:28.818 17:32:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:28.818 17:32:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:28.818 17:32:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:28.818 17:32:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:28.818 17:32:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:28.818 17:32:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:28.818 17:32:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:28.818 17:32:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # 
return 0 00:15:28.819 17:32:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:28.819 17:32:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:28.819 17:32:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:28.819 17:32:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:28.819 17:32:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:28.819 17:32:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:28.819 17:32:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:28.819 17:32:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:28.819 17:32:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:28.819 17:32:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:28.819 17:32:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:28.819 17:32:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81570 00:15:28.819 17:32:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 81570 ']' 00:15:28.819 17:32:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 81570 00:15:28.819 17:32:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:15:28.819 17:32:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:28.819 17:32:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81570 00:15:29.078 17:32:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:29.078 17:32:02 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:29.078 17:32:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81570' 00:15:29.078 killing process with pid 81570 00:15:29.078 17:32:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 81570 00:15:29.078 Received shutdown signal, test time was about 60.000000 seconds 00:15:29.078 00:15:29.078 Latency(us) 00:15:29.078 [2024-12-07T17:32:02.460Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:29.078 [2024-12-07T17:32:02.460Z] =================================================================================================================== 00:15:29.078 [2024-12-07T17:32:02.460Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:29.078 [2024-12-07 17:32:02.203180] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:29.078 17:32:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 81570 00:15:29.339 [2024-12-07 17:32:02.585539] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:30.280 ************************************ 00:15:30.280 END TEST raid5f_rebuild_test 00:15:30.280 ************************************ 00:15:30.280 17:32:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:30.280 00:15:30.280 real 0m15.069s 00:15:30.280 user 0m18.348s 00:15:30.280 sys 0m2.030s 00:15:30.280 17:32:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:30.280 17:32:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.540 17:32:03 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:15:30.540 17:32:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:30.541 17:32:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:30.541 17:32:03 bdev_raid 
-- common/autotest_common.sh@10 -- # set +x 00:15:30.541 ************************************ 00:15:30.541 START TEST raid5f_rebuild_test_sb 00:15:30.541 ************************************ 00:15:30.541 17:32:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:15:30.541 17:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:30.541 17:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:15:30.541 17:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:30.541 17:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:30.541 17:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:30.541 17:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:30.541 17:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:30.541 17:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:30.541 17:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:30.541 17:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:30.541 17:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:30.541 17:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:30.541 17:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:30.541 17:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:30.541 17:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:30.541 17:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 
00:15:30.541 17:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:30.541 17:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:30.541 17:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:30.541 17:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:30.541 17:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:30.541 17:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:30.541 17:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:30.541 17:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:30.541 17:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:30.541 17:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:30.541 17:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:30.541 17:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:30.541 17:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:30.541 17:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82010 00:15:30.541 17:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:30.541 17:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82010 00:15:30.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:30.541 17:32:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 82010 ']' 00:15:30.541 17:32:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:30.541 17:32:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:30.541 17:32:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:30.541 17:32:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:30.541 17:32:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.541 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:30.541 Zero copy mechanism will not be used. 00:15:30.541 [2024-12-07 17:32:03.804076] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:15:30.541 [2024-12-07 17:32:03.804259] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82010 ] 00:15:30.801 [2024-12-07 17:32:03.976846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.801 [2024-12-07 17:32:04.083956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.062 [2024-12-07 17:32:04.279405] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:31.062 [2024-12-07 17:32:04.279548] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:31.323 17:32:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:31.323 17:32:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:31.323 17:32:04 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:31.323 17:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:31.323 17:32:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.323 17:32:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.323 BaseBdev1_malloc 00:15:31.323 17:32:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.323 17:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:31.323 17:32:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.323 17:32:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.323 [2024-12-07 17:32:04.666255] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:31.323 [2024-12-07 17:32:04.666367] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:31.323 [2024-12-07 17:32:04.666407] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:31.323 [2024-12-07 17:32:04.666437] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:31.323 [2024-12-07 17:32:04.668499] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:31.323 [2024-12-07 17:32:04.668592] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:31.323 BaseBdev1 00:15:31.323 17:32:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.323 17:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:31.323 17:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # 
rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:31.323 17:32:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.323 17:32:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.584 BaseBdev2_malloc 00:15:31.584 17:32:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.584 17:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:31.584 17:32:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.584 17:32:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.584 [2024-12-07 17:32:04.721307] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:31.584 [2024-12-07 17:32:04.721404] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:31.584 [2024-12-07 17:32:04.721446] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:31.584 [2024-12-07 17:32:04.721478] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:31.584 [2024-12-07 17:32:04.723556] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:31.584 [2024-12-07 17:32:04.723631] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:31.584 BaseBdev2 00:15:31.584 17:32:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.584 17:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:31.584 17:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:31.584 17:32:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.584 
17:32:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.584 BaseBdev3_malloc 00:15:31.584 17:32:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.584 17:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:31.584 17:32:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.584 17:32:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.584 [2024-12-07 17:32:04.806834] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:31.584 [2024-12-07 17:32:04.806883] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:31.584 [2024-12-07 17:32:04.806905] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:31.584 [2024-12-07 17:32:04.806915] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:31.584 [2024-12-07 17:32:04.812259] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:31.584 [2024-12-07 17:32:04.812337] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:31.584 BaseBdev3 00:15:31.584 17:32:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.584 17:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:31.584 17:32:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.584 17:32:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.584 spare_malloc 00:15:31.584 17:32:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.584 17:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 
-- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:31.584 17:32:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.584 17:32:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.584 spare_delay 00:15:31.584 17:32:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.584 17:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:31.584 17:32:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.584 17:32:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.584 [2024-12-07 17:32:04.880236] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:31.584 [2024-12-07 17:32:04.880329] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:31.584 [2024-12-07 17:32:04.880364] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:31.584 [2024-12-07 17:32:04.880398] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:31.584 [2024-12-07 17:32:04.882405] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:31.584 [2024-12-07 17:32:04.882481] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:31.584 spare 00:15:31.584 17:32:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.584 17:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:31.584 17:32:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.584 17:32:04 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:31.584 [2024-12-07 17:32:04.892281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:31.584 [2024-12-07 17:32:04.894037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:31.584 [2024-12-07 17:32:04.894116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:31.584 [2024-12-07 17:32:04.894349] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:31.584 [2024-12-07 17:32:04.894398] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:31.584 [2024-12-07 17:32:04.894688] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:31.584 [2024-12-07 17:32:04.900491] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:31.584 [2024-12-07 17:32:04.900553] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:31.584 [2024-12-07 17:32:04.900791] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:31.584 17:32:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.584 17:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:31.584 17:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:31.584 17:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:31.585 17:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:31.585 17:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:31.585 17:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:15:31.585 17:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.585 17:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.585 17:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.585 17:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.585 17:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.585 17:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.585 17:32:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.585 17:32:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.585 17:32:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.585 17:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.585 "name": "raid_bdev1", 00:15:31.585 "uuid": "4af4702b-e399-46a7-ac56-22d99443136e", 00:15:31.585 "strip_size_kb": 64, 00:15:31.585 "state": "online", 00:15:31.585 "raid_level": "raid5f", 00:15:31.585 "superblock": true, 00:15:31.585 "num_base_bdevs": 3, 00:15:31.585 "num_base_bdevs_discovered": 3, 00:15:31.585 "num_base_bdevs_operational": 3, 00:15:31.585 "base_bdevs_list": [ 00:15:31.585 { 00:15:31.585 "name": "BaseBdev1", 00:15:31.585 "uuid": "78ad4a2b-57aa-526e-b59a-23d70d8fcca0", 00:15:31.585 "is_configured": true, 00:15:31.585 "data_offset": 2048, 00:15:31.585 "data_size": 63488 00:15:31.585 }, 00:15:31.585 { 00:15:31.585 "name": "BaseBdev2", 00:15:31.585 "uuid": "f71d7415-20f4-5a5f-b2db-68ddb2345516", 00:15:31.585 "is_configured": true, 00:15:31.585 "data_offset": 2048, 00:15:31.585 "data_size": 63488 00:15:31.585 }, 00:15:31.585 { 00:15:31.585 "name": 
"BaseBdev3", 00:15:31.585 "uuid": "9075a075-d7fb-507a-b2b9-57f57a1d881b", 00:15:31.585 "is_configured": true, 00:15:31.585 "data_offset": 2048, 00:15:31.585 "data_size": 63488 00:15:31.585 } 00:15:31.585 ] 00:15:31.585 }' 00:15:31.585 17:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.585 17:32:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.158 17:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:32.158 17:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:32.158 17:32:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.158 17:32:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.158 [2024-12-07 17:32:05.310667] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:32.158 17:32:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.158 17:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:15:32.158 17:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:32.158 17:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.158 17:32:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.158 17:32:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.159 17:32:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.159 17:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:32.159 17:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:32.159 17:32:05 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:32.159 17:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:32.159 17:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:32.159 17:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:32.159 17:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:32.159 17:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:32.159 17:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:32.159 17:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:32.159 17:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:32.159 17:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:32.159 17:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:32.159 17:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:32.417 [2024-12-07 17:32:05.570110] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:32.417 /dev/nbd0 00:15:32.417 17:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:32.417 17:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:32.417 17:32:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:32.417 17:32:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:32.417 17:32:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 
-- # (( i = 1 )) 00:15:32.417 17:32:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:32.417 17:32:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:32.417 17:32:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:32.417 17:32:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:32.417 17:32:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:32.417 17:32:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:32.417 1+0 records in 00:15:32.417 1+0 records out 00:15:32.417 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00051258 s, 8.0 MB/s 00:15:32.417 17:32:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:32.417 17:32:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:32.417 17:32:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:32.417 17:32:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:32.417 17:32:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:32.417 17:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:32.417 17:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:32.417 17:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:32.417 17:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:15:32.417 17:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 
00:15:32.417 17:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:15:32.675 496+0 records in 00:15:32.675 496+0 records out 00:15:32.675 65011712 bytes (65 MB, 62 MiB) copied, 0.339012 s, 192 MB/s 00:15:32.675 17:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:32.675 17:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:32.675 17:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:32.675 17:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:32.675 17:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:32.675 17:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:32.675 17:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:32.934 17:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:32.934 [2024-12-07 17:32:06.192963] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:32.934 17:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:32.934 17:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:32.934 17:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:32.934 17:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:32.934 17:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:32.934 17:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:32.934 17:32:06 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:32.934 17:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:32.934 17:32:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.934 17:32:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.934 [2024-12-07 17:32:06.208834] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:32.934 17:32:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.934 17:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:32.934 17:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:32.934 17:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:32.934 17:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:32.934 17:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:32.934 17:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:32.934 17:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.934 17:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.934 17:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.934 17:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.934 17:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.934 17:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:15:32.934 17:32:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.934 17:32:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.934 17:32:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.934 17:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.934 "name": "raid_bdev1", 00:15:32.934 "uuid": "4af4702b-e399-46a7-ac56-22d99443136e", 00:15:32.934 "strip_size_kb": 64, 00:15:32.934 "state": "online", 00:15:32.934 "raid_level": "raid5f", 00:15:32.934 "superblock": true, 00:15:32.934 "num_base_bdevs": 3, 00:15:32.934 "num_base_bdevs_discovered": 2, 00:15:32.934 "num_base_bdevs_operational": 2, 00:15:32.934 "base_bdevs_list": [ 00:15:32.934 { 00:15:32.934 "name": null, 00:15:32.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.934 "is_configured": false, 00:15:32.934 "data_offset": 0, 00:15:32.934 "data_size": 63488 00:15:32.934 }, 00:15:32.934 { 00:15:32.934 "name": "BaseBdev2", 00:15:32.934 "uuid": "f71d7415-20f4-5a5f-b2db-68ddb2345516", 00:15:32.934 "is_configured": true, 00:15:32.934 "data_offset": 2048, 00:15:32.934 "data_size": 63488 00:15:32.934 }, 00:15:32.934 { 00:15:32.934 "name": "BaseBdev3", 00:15:32.934 "uuid": "9075a075-d7fb-507a-b2b9-57f57a1d881b", 00:15:32.934 "is_configured": true, 00:15:32.934 "data_offset": 2048, 00:15:32.934 "data_size": 63488 00:15:32.934 } 00:15:32.934 ] 00:15:32.934 }' 00:15:32.934 17:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.934 17:32:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.502 17:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:33.502 17:32:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.502 17:32:06 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.502 [2024-12-07 17:32:06.656066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:33.502 [2024-12-07 17:32:06.672318] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:15:33.502 17:32:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.502 17:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:33.502 [2024-12-07 17:32:06.679975] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:34.441 17:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:34.441 17:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:34.441 17:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:34.441 17:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:34.442 17:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:34.442 17:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.442 17:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.442 17:32:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.442 17:32:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.442 17:32:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.442 17:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:34.442 "name": "raid_bdev1", 00:15:34.442 "uuid": "4af4702b-e399-46a7-ac56-22d99443136e", 
00:15:34.442 "strip_size_kb": 64, 00:15:34.442 "state": "online", 00:15:34.442 "raid_level": "raid5f", 00:15:34.442 "superblock": true, 00:15:34.442 "num_base_bdevs": 3, 00:15:34.442 "num_base_bdevs_discovered": 3, 00:15:34.442 "num_base_bdevs_operational": 3, 00:15:34.442 "process": { 00:15:34.442 "type": "rebuild", 00:15:34.442 "target": "spare", 00:15:34.442 "progress": { 00:15:34.442 "blocks": 20480, 00:15:34.442 "percent": 16 00:15:34.442 } 00:15:34.442 }, 00:15:34.442 "base_bdevs_list": [ 00:15:34.442 { 00:15:34.442 "name": "spare", 00:15:34.442 "uuid": "e7df1b9a-7108-507d-bada-8ba85153c732", 00:15:34.442 "is_configured": true, 00:15:34.442 "data_offset": 2048, 00:15:34.442 "data_size": 63488 00:15:34.442 }, 00:15:34.442 { 00:15:34.442 "name": "BaseBdev2", 00:15:34.442 "uuid": "f71d7415-20f4-5a5f-b2db-68ddb2345516", 00:15:34.442 "is_configured": true, 00:15:34.442 "data_offset": 2048, 00:15:34.442 "data_size": 63488 00:15:34.442 }, 00:15:34.442 { 00:15:34.442 "name": "BaseBdev3", 00:15:34.442 "uuid": "9075a075-d7fb-507a-b2b9-57f57a1d881b", 00:15:34.442 "is_configured": true, 00:15:34.442 "data_offset": 2048, 00:15:34.442 "data_size": 63488 00:15:34.442 } 00:15:34.442 ] 00:15:34.442 }' 00:15:34.442 17:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:34.442 17:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:34.442 17:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:34.702 17:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:34.702 17:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:34.702 17:32:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.702 17:32:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:15:34.702 [2024-12-07 17:32:07.835340] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:34.702 [2024-12-07 17:32:07.888066] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:34.702 [2024-12-07 17:32:07.888123] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:34.702 [2024-12-07 17:32:07.888142] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:34.702 [2024-12-07 17:32:07.888150] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:34.702 17:32:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.702 17:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:34.702 17:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:34.702 17:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:34.702 17:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:34.702 17:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:34.702 17:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:34.702 17:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.702 17:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.702 17:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.702 17:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.702 17:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.702 
17:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.702 17:32:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.702 17:32:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.702 17:32:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.702 17:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.702 "name": "raid_bdev1", 00:15:34.702 "uuid": "4af4702b-e399-46a7-ac56-22d99443136e", 00:15:34.702 "strip_size_kb": 64, 00:15:34.702 "state": "online", 00:15:34.702 "raid_level": "raid5f", 00:15:34.702 "superblock": true, 00:15:34.702 "num_base_bdevs": 3, 00:15:34.702 "num_base_bdevs_discovered": 2, 00:15:34.702 "num_base_bdevs_operational": 2, 00:15:34.702 "base_bdevs_list": [ 00:15:34.702 { 00:15:34.702 "name": null, 00:15:34.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.702 "is_configured": false, 00:15:34.702 "data_offset": 0, 00:15:34.702 "data_size": 63488 00:15:34.702 }, 00:15:34.702 { 00:15:34.702 "name": "BaseBdev2", 00:15:34.702 "uuid": "f71d7415-20f4-5a5f-b2db-68ddb2345516", 00:15:34.702 "is_configured": true, 00:15:34.702 "data_offset": 2048, 00:15:34.702 "data_size": 63488 00:15:34.702 }, 00:15:34.702 { 00:15:34.702 "name": "BaseBdev3", 00:15:34.702 "uuid": "9075a075-d7fb-507a-b2b9-57f57a1d881b", 00:15:34.702 "is_configured": true, 00:15:34.702 "data_offset": 2048, 00:15:34.702 "data_size": 63488 00:15:34.702 } 00:15:34.702 ] 00:15:34.702 }' 00:15:34.702 17:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.702 17:32:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.272 17:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:35.272 17:32:08 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:35.272 17:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:35.272 17:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:35.272 17:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:35.272 17:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.272 17:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.272 17:32:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.272 17:32:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.272 17:32:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.272 17:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:35.272 "name": "raid_bdev1", 00:15:35.272 "uuid": "4af4702b-e399-46a7-ac56-22d99443136e", 00:15:35.272 "strip_size_kb": 64, 00:15:35.272 "state": "online", 00:15:35.272 "raid_level": "raid5f", 00:15:35.272 "superblock": true, 00:15:35.272 "num_base_bdevs": 3, 00:15:35.272 "num_base_bdevs_discovered": 2, 00:15:35.272 "num_base_bdevs_operational": 2, 00:15:35.272 "base_bdevs_list": [ 00:15:35.272 { 00:15:35.272 "name": null, 00:15:35.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.272 "is_configured": false, 00:15:35.272 "data_offset": 0, 00:15:35.272 "data_size": 63488 00:15:35.272 }, 00:15:35.272 { 00:15:35.272 "name": "BaseBdev2", 00:15:35.272 "uuid": "f71d7415-20f4-5a5f-b2db-68ddb2345516", 00:15:35.272 "is_configured": true, 00:15:35.272 "data_offset": 2048, 00:15:35.272 "data_size": 63488 00:15:35.272 }, 00:15:35.272 { 00:15:35.272 "name": "BaseBdev3", 00:15:35.272 "uuid": 
"9075a075-d7fb-507a-b2b9-57f57a1d881b", 00:15:35.272 "is_configured": true, 00:15:35.272 "data_offset": 2048, 00:15:35.272 "data_size": 63488 00:15:35.272 } 00:15:35.272 ] 00:15:35.272 }' 00:15:35.272 17:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:35.272 17:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:35.272 17:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:35.272 17:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:35.272 17:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:35.272 17:32:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.272 17:32:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.272 [2024-12-07 17:32:08.526963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:35.272 [2024-12-07 17:32:08.542204] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:15:35.272 17:32:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.272 17:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:35.272 [2024-12-07 17:32:08.549604] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:36.212 17:32:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:36.212 17:32:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:36.212 17:32:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:36.212 17:32:09 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:15:36.212 17:32:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:36.212 17:32:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.212 17:32:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.212 17:32:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.212 17:32:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.212 17:32:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.471 17:32:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:36.471 "name": "raid_bdev1", 00:15:36.471 "uuid": "4af4702b-e399-46a7-ac56-22d99443136e", 00:15:36.471 "strip_size_kb": 64, 00:15:36.471 "state": "online", 00:15:36.471 "raid_level": "raid5f", 00:15:36.471 "superblock": true, 00:15:36.471 "num_base_bdevs": 3, 00:15:36.471 "num_base_bdevs_discovered": 3, 00:15:36.471 "num_base_bdevs_operational": 3, 00:15:36.471 "process": { 00:15:36.471 "type": "rebuild", 00:15:36.471 "target": "spare", 00:15:36.471 "progress": { 00:15:36.471 "blocks": 20480, 00:15:36.471 "percent": 16 00:15:36.471 } 00:15:36.471 }, 00:15:36.471 "base_bdevs_list": [ 00:15:36.471 { 00:15:36.471 "name": "spare", 00:15:36.471 "uuid": "e7df1b9a-7108-507d-bada-8ba85153c732", 00:15:36.471 "is_configured": true, 00:15:36.471 "data_offset": 2048, 00:15:36.471 "data_size": 63488 00:15:36.471 }, 00:15:36.471 { 00:15:36.471 "name": "BaseBdev2", 00:15:36.471 "uuid": "f71d7415-20f4-5a5f-b2db-68ddb2345516", 00:15:36.471 "is_configured": true, 00:15:36.471 "data_offset": 2048, 00:15:36.471 "data_size": 63488 00:15:36.471 }, 00:15:36.471 { 00:15:36.471 "name": "BaseBdev3", 00:15:36.471 "uuid": "9075a075-d7fb-507a-b2b9-57f57a1d881b", 00:15:36.471 
"is_configured": true, 00:15:36.471 "data_offset": 2048, 00:15:36.471 "data_size": 63488 00:15:36.471 } 00:15:36.471 ] 00:15:36.471 }' 00:15:36.471 17:32:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:36.471 17:32:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:36.471 17:32:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:36.471 17:32:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:36.471 17:32:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:36.471 17:32:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:36.471 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:36.471 17:32:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:15:36.471 17:32:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:36.471 17:32:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=560 00:15:36.471 17:32:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:36.471 17:32:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:36.471 17:32:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:36.471 17:32:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:36.471 17:32:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:36.471 17:32:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:36.471 17:32:09 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.471 17:32:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.471 17:32:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.471 17:32:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.471 17:32:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.471 17:32:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:36.471 "name": "raid_bdev1", 00:15:36.471 "uuid": "4af4702b-e399-46a7-ac56-22d99443136e", 00:15:36.471 "strip_size_kb": 64, 00:15:36.471 "state": "online", 00:15:36.471 "raid_level": "raid5f", 00:15:36.471 "superblock": true, 00:15:36.471 "num_base_bdevs": 3, 00:15:36.471 "num_base_bdevs_discovered": 3, 00:15:36.471 "num_base_bdevs_operational": 3, 00:15:36.471 "process": { 00:15:36.471 "type": "rebuild", 00:15:36.471 "target": "spare", 00:15:36.471 "progress": { 00:15:36.471 "blocks": 22528, 00:15:36.471 "percent": 17 00:15:36.471 } 00:15:36.471 }, 00:15:36.471 "base_bdevs_list": [ 00:15:36.471 { 00:15:36.471 "name": "spare", 00:15:36.471 "uuid": "e7df1b9a-7108-507d-bada-8ba85153c732", 00:15:36.472 "is_configured": true, 00:15:36.472 "data_offset": 2048, 00:15:36.472 "data_size": 63488 00:15:36.472 }, 00:15:36.472 { 00:15:36.472 "name": "BaseBdev2", 00:15:36.472 "uuid": "f71d7415-20f4-5a5f-b2db-68ddb2345516", 00:15:36.472 "is_configured": true, 00:15:36.472 "data_offset": 2048, 00:15:36.472 "data_size": 63488 00:15:36.472 }, 00:15:36.472 { 00:15:36.472 "name": "BaseBdev3", 00:15:36.472 "uuid": "9075a075-d7fb-507a-b2b9-57f57a1d881b", 00:15:36.472 "is_configured": true, 00:15:36.472 "data_offset": 2048, 00:15:36.472 "data_size": 63488 00:15:36.472 } 00:15:36.472 ] 00:15:36.472 }' 00:15:36.472 17:32:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq 
-r '.process.type // "none"' 00:15:36.472 17:32:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:36.472 17:32:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:36.472 17:32:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:36.472 17:32:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:37.848 17:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:37.848 17:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:37.848 17:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:37.848 17:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:37.848 17:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:37.848 17:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:37.848 17:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.848 17:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.848 17:32:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.848 17:32:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.848 17:32:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.848 17:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:37.848 "name": "raid_bdev1", 00:15:37.848 "uuid": "4af4702b-e399-46a7-ac56-22d99443136e", 00:15:37.848 "strip_size_kb": 64, 00:15:37.848 "state": "online", 00:15:37.848 
"raid_level": "raid5f", 00:15:37.848 "superblock": true, 00:15:37.848 "num_base_bdevs": 3, 00:15:37.848 "num_base_bdevs_discovered": 3, 00:15:37.848 "num_base_bdevs_operational": 3, 00:15:37.848 "process": { 00:15:37.848 "type": "rebuild", 00:15:37.848 "target": "spare", 00:15:37.848 "progress": { 00:15:37.848 "blocks": 45056, 00:15:37.848 "percent": 35 00:15:37.848 } 00:15:37.848 }, 00:15:37.848 "base_bdevs_list": [ 00:15:37.848 { 00:15:37.848 "name": "spare", 00:15:37.848 "uuid": "e7df1b9a-7108-507d-bada-8ba85153c732", 00:15:37.848 "is_configured": true, 00:15:37.848 "data_offset": 2048, 00:15:37.848 "data_size": 63488 00:15:37.848 }, 00:15:37.848 { 00:15:37.848 "name": "BaseBdev2", 00:15:37.848 "uuid": "f71d7415-20f4-5a5f-b2db-68ddb2345516", 00:15:37.848 "is_configured": true, 00:15:37.848 "data_offset": 2048, 00:15:37.848 "data_size": 63488 00:15:37.848 }, 00:15:37.848 { 00:15:37.848 "name": "BaseBdev3", 00:15:37.848 "uuid": "9075a075-d7fb-507a-b2b9-57f57a1d881b", 00:15:37.848 "is_configured": true, 00:15:37.848 "data_offset": 2048, 00:15:37.848 "data_size": 63488 00:15:37.848 } 00:15:37.848 ] 00:15:37.848 }' 00:15:37.849 17:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:37.849 17:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:37.849 17:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:37.849 17:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:37.849 17:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:38.785 17:32:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:38.785 17:32:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:38.785 17:32:11 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:38.785 17:32:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:38.785 17:32:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:38.785 17:32:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:38.785 17:32:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.785 17:32:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.785 17:32:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.785 17:32:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.785 17:32:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.785 17:32:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:38.785 "name": "raid_bdev1", 00:15:38.785 "uuid": "4af4702b-e399-46a7-ac56-22d99443136e", 00:15:38.785 "strip_size_kb": 64, 00:15:38.785 "state": "online", 00:15:38.785 "raid_level": "raid5f", 00:15:38.785 "superblock": true, 00:15:38.785 "num_base_bdevs": 3, 00:15:38.785 "num_base_bdevs_discovered": 3, 00:15:38.785 "num_base_bdevs_operational": 3, 00:15:38.785 "process": { 00:15:38.785 "type": "rebuild", 00:15:38.785 "target": "spare", 00:15:38.785 "progress": { 00:15:38.785 "blocks": 67584, 00:15:38.785 "percent": 53 00:15:38.785 } 00:15:38.785 }, 00:15:38.785 "base_bdevs_list": [ 00:15:38.785 { 00:15:38.785 "name": "spare", 00:15:38.785 "uuid": "e7df1b9a-7108-507d-bada-8ba85153c732", 00:15:38.785 "is_configured": true, 00:15:38.785 "data_offset": 2048, 00:15:38.785 "data_size": 63488 00:15:38.785 }, 00:15:38.785 { 00:15:38.785 "name": "BaseBdev2", 00:15:38.785 "uuid": "f71d7415-20f4-5a5f-b2db-68ddb2345516", 00:15:38.785 
"is_configured": true, 00:15:38.785 "data_offset": 2048, 00:15:38.785 "data_size": 63488 00:15:38.785 }, 00:15:38.786 { 00:15:38.786 "name": "BaseBdev3", 00:15:38.786 "uuid": "9075a075-d7fb-507a-b2b9-57f57a1d881b", 00:15:38.786 "is_configured": true, 00:15:38.786 "data_offset": 2048, 00:15:38.786 "data_size": 63488 00:15:38.786 } 00:15:38.786 ] 00:15:38.786 }' 00:15:38.786 17:32:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:38.786 17:32:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:38.786 17:32:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:38.786 17:32:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:38.786 17:32:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:39.721 17:32:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:39.722 17:32:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:39.722 17:32:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:39.722 17:32:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:39.722 17:32:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:39.722 17:32:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:39.722 17:32:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.722 17:32:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.722 17:32:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.722 17:32:13 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.980 17:32:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.980 17:32:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:39.980 "name": "raid_bdev1", 00:15:39.980 "uuid": "4af4702b-e399-46a7-ac56-22d99443136e", 00:15:39.980 "strip_size_kb": 64, 00:15:39.980 "state": "online", 00:15:39.980 "raid_level": "raid5f", 00:15:39.980 "superblock": true, 00:15:39.980 "num_base_bdevs": 3, 00:15:39.980 "num_base_bdevs_discovered": 3, 00:15:39.980 "num_base_bdevs_operational": 3, 00:15:39.980 "process": { 00:15:39.980 "type": "rebuild", 00:15:39.980 "target": "spare", 00:15:39.980 "progress": { 00:15:39.980 "blocks": 92160, 00:15:39.980 "percent": 72 00:15:39.980 } 00:15:39.980 }, 00:15:39.980 "base_bdevs_list": [ 00:15:39.980 { 00:15:39.980 "name": "spare", 00:15:39.980 "uuid": "e7df1b9a-7108-507d-bada-8ba85153c732", 00:15:39.980 "is_configured": true, 00:15:39.980 "data_offset": 2048, 00:15:39.980 "data_size": 63488 00:15:39.980 }, 00:15:39.980 { 00:15:39.980 "name": "BaseBdev2", 00:15:39.980 "uuid": "f71d7415-20f4-5a5f-b2db-68ddb2345516", 00:15:39.980 "is_configured": true, 00:15:39.980 "data_offset": 2048, 00:15:39.980 "data_size": 63488 00:15:39.980 }, 00:15:39.980 { 00:15:39.980 "name": "BaseBdev3", 00:15:39.980 "uuid": "9075a075-d7fb-507a-b2b9-57f57a1d881b", 00:15:39.980 "is_configured": true, 00:15:39.980 "data_offset": 2048, 00:15:39.980 "data_size": 63488 00:15:39.980 } 00:15:39.980 ] 00:15:39.980 }' 00:15:39.980 17:32:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:39.980 17:32:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:39.980 17:32:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:39.980 17:32:13 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:39.980 17:32:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:40.916 17:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:40.916 17:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:40.916 17:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:40.916 17:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:40.916 17:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:40.916 17:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:40.916 17:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.916 17:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.916 17:32:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.916 17:32:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.916 17:32:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.916 17:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:40.916 "name": "raid_bdev1", 00:15:40.916 "uuid": "4af4702b-e399-46a7-ac56-22d99443136e", 00:15:40.916 "strip_size_kb": 64, 00:15:40.916 "state": "online", 00:15:40.916 "raid_level": "raid5f", 00:15:40.916 "superblock": true, 00:15:40.916 "num_base_bdevs": 3, 00:15:40.916 "num_base_bdevs_discovered": 3, 00:15:40.916 "num_base_bdevs_operational": 3, 00:15:40.916 "process": { 00:15:40.916 "type": "rebuild", 00:15:40.916 "target": "spare", 00:15:40.916 "progress": { 00:15:40.916 "blocks": 114688, 
00:15:40.916 "percent": 90 00:15:40.916 } 00:15:40.916 }, 00:15:40.916 "base_bdevs_list": [ 00:15:40.916 { 00:15:40.916 "name": "spare", 00:15:40.916 "uuid": "e7df1b9a-7108-507d-bada-8ba85153c732", 00:15:40.916 "is_configured": true, 00:15:40.916 "data_offset": 2048, 00:15:40.916 "data_size": 63488 00:15:40.916 }, 00:15:40.916 { 00:15:40.916 "name": "BaseBdev2", 00:15:40.916 "uuid": "f71d7415-20f4-5a5f-b2db-68ddb2345516", 00:15:40.916 "is_configured": true, 00:15:40.916 "data_offset": 2048, 00:15:40.916 "data_size": 63488 00:15:40.916 }, 00:15:40.916 { 00:15:40.916 "name": "BaseBdev3", 00:15:40.916 "uuid": "9075a075-d7fb-507a-b2b9-57f57a1d881b", 00:15:40.916 "is_configured": true, 00:15:40.916 "data_offset": 2048, 00:15:40.916 "data_size": 63488 00:15:40.916 } 00:15:40.916 ] 00:15:40.916 }' 00:15:40.916 17:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:41.176 17:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:41.176 17:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:41.176 17:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:41.176 17:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:41.447 [2024-12-07 17:32:14.791755] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:41.447 [2024-12-07 17:32:14.791875] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:41.447 [2024-12-07 17:32:14.792043] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:42.031 17:32:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:42.031 17:32:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:42.031 
17:32:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:42.031 17:32:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:42.031 17:32:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:42.031 17:32:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:42.031 17:32:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.031 17:32:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.031 17:32:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.031 17:32:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.031 17:32:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.031 17:32:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:42.031 "name": "raid_bdev1", 00:15:42.031 "uuid": "4af4702b-e399-46a7-ac56-22d99443136e", 00:15:42.031 "strip_size_kb": 64, 00:15:42.031 "state": "online", 00:15:42.031 "raid_level": "raid5f", 00:15:42.031 "superblock": true, 00:15:42.031 "num_base_bdevs": 3, 00:15:42.031 "num_base_bdevs_discovered": 3, 00:15:42.031 "num_base_bdevs_operational": 3, 00:15:42.031 "base_bdevs_list": [ 00:15:42.031 { 00:15:42.031 "name": "spare", 00:15:42.031 "uuid": "e7df1b9a-7108-507d-bada-8ba85153c732", 00:15:42.031 "is_configured": true, 00:15:42.031 "data_offset": 2048, 00:15:42.031 "data_size": 63488 00:15:42.031 }, 00:15:42.031 { 00:15:42.031 "name": "BaseBdev2", 00:15:42.031 "uuid": "f71d7415-20f4-5a5f-b2db-68ddb2345516", 00:15:42.031 "is_configured": true, 00:15:42.031 "data_offset": 2048, 00:15:42.031 "data_size": 63488 00:15:42.031 }, 00:15:42.031 { 00:15:42.031 "name": "BaseBdev3", 00:15:42.031 
"uuid": "9075a075-d7fb-507a-b2b9-57f57a1d881b", 00:15:42.031 "is_configured": true, 00:15:42.031 "data_offset": 2048, 00:15:42.031 "data_size": 63488 00:15:42.031 } 00:15:42.031 ] 00:15:42.031 }' 00:15:42.031 17:32:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:42.290 17:32:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:42.290 17:32:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:42.290 17:32:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:42.290 17:32:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:42.290 17:32:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:42.290 17:32:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:42.291 17:32:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:42.291 17:32:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:42.291 17:32:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:42.291 17:32:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.291 17:32:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.291 17:32:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.291 17:32:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.291 17:32:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.291 17:32:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:42.291 "name": 
"raid_bdev1", 00:15:42.291 "uuid": "4af4702b-e399-46a7-ac56-22d99443136e", 00:15:42.291 "strip_size_kb": 64, 00:15:42.291 "state": "online", 00:15:42.291 "raid_level": "raid5f", 00:15:42.291 "superblock": true, 00:15:42.291 "num_base_bdevs": 3, 00:15:42.291 "num_base_bdevs_discovered": 3, 00:15:42.291 "num_base_bdevs_operational": 3, 00:15:42.291 "base_bdevs_list": [ 00:15:42.291 { 00:15:42.291 "name": "spare", 00:15:42.291 "uuid": "e7df1b9a-7108-507d-bada-8ba85153c732", 00:15:42.291 "is_configured": true, 00:15:42.291 "data_offset": 2048, 00:15:42.291 "data_size": 63488 00:15:42.291 }, 00:15:42.291 { 00:15:42.291 "name": "BaseBdev2", 00:15:42.291 "uuid": "f71d7415-20f4-5a5f-b2db-68ddb2345516", 00:15:42.291 "is_configured": true, 00:15:42.291 "data_offset": 2048, 00:15:42.291 "data_size": 63488 00:15:42.291 }, 00:15:42.291 { 00:15:42.291 "name": "BaseBdev3", 00:15:42.291 "uuid": "9075a075-d7fb-507a-b2b9-57f57a1d881b", 00:15:42.291 "is_configured": true, 00:15:42.291 "data_offset": 2048, 00:15:42.291 "data_size": 63488 00:15:42.291 } 00:15:42.291 ] 00:15:42.291 }' 00:15:42.291 17:32:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:42.291 17:32:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:42.291 17:32:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:42.291 17:32:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:42.291 17:32:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:42.291 17:32:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:42.291 17:32:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:42.291 17:32:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid5f 00:15:42.291 17:32:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:42.291 17:32:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:42.291 17:32:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.291 17:32:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.291 17:32:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.291 17:32:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.291 17:32:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.291 17:32:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.291 17:32:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.291 17:32:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.291 17:32:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.291 17:32:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.291 "name": "raid_bdev1", 00:15:42.291 "uuid": "4af4702b-e399-46a7-ac56-22d99443136e", 00:15:42.291 "strip_size_kb": 64, 00:15:42.291 "state": "online", 00:15:42.291 "raid_level": "raid5f", 00:15:42.291 "superblock": true, 00:15:42.291 "num_base_bdevs": 3, 00:15:42.291 "num_base_bdevs_discovered": 3, 00:15:42.291 "num_base_bdevs_operational": 3, 00:15:42.291 "base_bdevs_list": [ 00:15:42.291 { 00:15:42.291 "name": "spare", 00:15:42.291 "uuid": "e7df1b9a-7108-507d-bada-8ba85153c732", 00:15:42.291 "is_configured": true, 00:15:42.291 "data_offset": 2048, 00:15:42.291 "data_size": 63488 00:15:42.291 }, 00:15:42.291 { 00:15:42.291 "name": "BaseBdev2", 
00:15:42.291 "uuid": "f71d7415-20f4-5a5f-b2db-68ddb2345516", 00:15:42.291 "is_configured": true, 00:15:42.291 "data_offset": 2048, 00:15:42.291 "data_size": 63488 00:15:42.291 }, 00:15:42.291 { 00:15:42.291 "name": "BaseBdev3", 00:15:42.291 "uuid": "9075a075-d7fb-507a-b2b9-57f57a1d881b", 00:15:42.291 "is_configured": true, 00:15:42.291 "data_offset": 2048, 00:15:42.291 "data_size": 63488 00:15:42.291 } 00:15:42.291 ] 00:15:42.291 }' 00:15:42.291 17:32:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.291 17:32:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.861 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:42.862 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.862 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.862 [2024-12-07 17:32:16.018352] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:42.862 [2024-12-07 17:32:16.018382] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:42.862 [2024-12-07 17:32:16.018472] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:42.862 [2024-12-07 17:32:16.018552] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:42.862 [2024-12-07 17:32:16.018567] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:42.862 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.862 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.862 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.862 17:32:16 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:42.862 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.862 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.862 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:42.862 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:42.862 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:42.862 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:42.862 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:42.862 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:42.862 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:42.862 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:42.862 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:42.862 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:42.862 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:42.862 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:42.862 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:43.122 /dev/nbd0 00:15:43.122 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:43.122 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:15:43.122 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:43.122 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:43.122 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:43.122 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:43.122 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:43.122 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:43.122 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:43.122 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:43.122 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:43.122 1+0 records in 00:15:43.122 1+0 records out 00:15:43.122 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000509352 s, 8.0 MB/s 00:15:43.122 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:43.122 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:43.122 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:43.122 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:43.122 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:43.122 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:43.122 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 
2 )) 00:15:43.122 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:43.382 /dev/nbd1 00:15:43.382 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:43.382 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:43.382 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:43.382 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:43.382 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:43.382 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:43.382 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:43.382 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:43.382 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:43.382 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:43.382 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:43.382 1+0 records in 00:15:43.382 1+0 records out 00:15:43.382 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00040966 s, 10.0 MB/s 00:15:43.382 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:43.382 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:43.382 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:15:43.382 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:43.382 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:43.382 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:43.382 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:43.382 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:43.382 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:43.382 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:43.382 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:43.382 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:43.382 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:43.382 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:43.382 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:43.642 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:43.642 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:43.642 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:43.642 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:43.642 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:43.642 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:15:43.642 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:43.642 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:43.642 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:43.642 17:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:43.902 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:43.902 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:43.902 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:43.902 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:43.902 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:43.902 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:43.902 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:43.902 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:43.902 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:43.902 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:43.902 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.902 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.902 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.902 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:43.902 17:32:17 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.902 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.903 [2024-12-07 17:32:17.168840] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:43.903 [2024-12-07 17:32:17.168902] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:43.903 [2024-12-07 17:32:17.168924] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:43.903 [2024-12-07 17:32:17.168950] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:43.903 [2024-12-07 17:32:17.171322] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:43.903 [2024-12-07 17:32:17.171363] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:43.903 [2024-12-07 17:32:17.171480] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:43.903 [2024-12-07 17:32:17.171541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:43.903 [2024-12-07 17:32:17.171690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:43.903 [2024-12-07 17:32:17.171793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:43.903 spare 00:15:43.903 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.903 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:43.903 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.903 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.903 [2024-12-07 17:32:17.271695] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 
00:15:43.903 [2024-12-07 17:32:17.271725] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:43.903 [2024-12-07 17:32:17.272025] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:15:43.903 [2024-12-07 17:32:17.277599] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:43.903 [2024-12-07 17:32:17.277621] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:43.903 [2024-12-07 17:32:17.277822] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:44.163 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.163 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:44.163 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:44.163 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:44.163 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:44.163 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:44.163 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:44.163 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.163 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.163 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.163 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.163 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:15:44.163 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.163 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.163 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.163 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.163 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.163 "name": "raid_bdev1", 00:15:44.163 "uuid": "4af4702b-e399-46a7-ac56-22d99443136e", 00:15:44.163 "strip_size_kb": 64, 00:15:44.163 "state": "online", 00:15:44.163 "raid_level": "raid5f", 00:15:44.163 "superblock": true, 00:15:44.163 "num_base_bdevs": 3, 00:15:44.163 "num_base_bdevs_discovered": 3, 00:15:44.163 "num_base_bdevs_operational": 3, 00:15:44.163 "base_bdevs_list": [ 00:15:44.163 { 00:15:44.163 "name": "spare", 00:15:44.163 "uuid": "e7df1b9a-7108-507d-bada-8ba85153c732", 00:15:44.163 "is_configured": true, 00:15:44.163 "data_offset": 2048, 00:15:44.163 "data_size": 63488 00:15:44.163 }, 00:15:44.163 { 00:15:44.163 "name": "BaseBdev2", 00:15:44.163 "uuid": "f71d7415-20f4-5a5f-b2db-68ddb2345516", 00:15:44.163 "is_configured": true, 00:15:44.163 "data_offset": 2048, 00:15:44.163 "data_size": 63488 00:15:44.163 }, 00:15:44.163 { 00:15:44.163 "name": "BaseBdev3", 00:15:44.163 "uuid": "9075a075-d7fb-507a-b2b9-57f57a1d881b", 00:15:44.163 "is_configured": true, 00:15:44.163 "data_offset": 2048, 00:15:44.163 "data_size": 63488 00:15:44.163 } 00:15:44.163 ] 00:15:44.163 }' 00:15:44.163 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.163 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.424 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:44.424 17:32:17 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:44.424 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:44.424 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:44.424 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:44.424 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.424 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.424 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.424 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.424 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.424 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:44.424 "name": "raid_bdev1", 00:15:44.424 "uuid": "4af4702b-e399-46a7-ac56-22d99443136e", 00:15:44.424 "strip_size_kb": 64, 00:15:44.424 "state": "online", 00:15:44.424 "raid_level": "raid5f", 00:15:44.424 "superblock": true, 00:15:44.424 "num_base_bdevs": 3, 00:15:44.424 "num_base_bdevs_discovered": 3, 00:15:44.424 "num_base_bdevs_operational": 3, 00:15:44.424 "base_bdevs_list": [ 00:15:44.424 { 00:15:44.424 "name": "spare", 00:15:44.424 "uuid": "e7df1b9a-7108-507d-bada-8ba85153c732", 00:15:44.424 "is_configured": true, 00:15:44.424 "data_offset": 2048, 00:15:44.424 "data_size": 63488 00:15:44.424 }, 00:15:44.424 { 00:15:44.424 "name": "BaseBdev2", 00:15:44.424 "uuid": "f71d7415-20f4-5a5f-b2db-68ddb2345516", 00:15:44.424 "is_configured": true, 00:15:44.424 "data_offset": 2048, 00:15:44.424 "data_size": 63488 00:15:44.424 }, 00:15:44.424 { 00:15:44.424 "name": "BaseBdev3", 00:15:44.424 "uuid": 
"9075a075-d7fb-507a-b2b9-57f57a1d881b", 00:15:44.424 "is_configured": true, 00:15:44.424 "data_offset": 2048, 00:15:44.424 "data_size": 63488 00:15:44.424 } 00:15:44.424 ] 00:15:44.424 }' 00:15:44.424 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:44.424 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:44.684 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:44.684 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:44.684 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.684 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.684 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.684 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:44.684 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.684 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:44.684 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:44.684 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.684 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.684 [2024-12-07 17:32:17.899568] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:44.684 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.684 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:44.684 
17:32:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:44.684 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:44.684 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:44.684 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:44.684 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:44.684 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.684 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.684 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.684 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.684 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.684 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.684 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.684 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.684 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.684 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.684 "name": "raid_bdev1", 00:15:44.684 "uuid": "4af4702b-e399-46a7-ac56-22d99443136e", 00:15:44.684 "strip_size_kb": 64, 00:15:44.684 "state": "online", 00:15:44.684 "raid_level": "raid5f", 00:15:44.684 "superblock": true, 00:15:44.684 "num_base_bdevs": 3, 00:15:44.684 "num_base_bdevs_discovered": 2, 00:15:44.684 "num_base_bdevs_operational": 2, 
00:15:44.684 "base_bdevs_list": [ 00:15:44.684 { 00:15:44.684 "name": null, 00:15:44.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.684 "is_configured": false, 00:15:44.684 "data_offset": 0, 00:15:44.684 "data_size": 63488 00:15:44.684 }, 00:15:44.684 { 00:15:44.684 "name": "BaseBdev2", 00:15:44.684 "uuid": "f71d7415-20f4-5a5f-b2db-68ddb2345516", 00:15:44.684 "is_configured": true, 00:15:44.684 "data_offset": 2048, 00:15:44.684 "data_size": 63488 00:15:44.684 }, 00:15:44.684 { 00:15:44.684 "name": "BaseBdev3", 00:15:44.684 "uuid": "9075a075-d7fb-507a-b2b9-57f57a1d881b", 00:15:44.684 "is_configured": true, 00:15:44.684 "data_offset": 2048, 00:15:44.684 "data_size": 63488 00:15:44.684 } 00:15:44.684 ] 00:15:44.684 }' 00:15:44.684 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.684 17:32:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.254 17:32:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:45.255 17:32:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.255 17:32:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.255 [2024-12-07 17:32:18.331217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:45.255 [2024-12-07 17:32:18.331479] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:45.255 [2024-12-07 17:32:18.331545] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:45.255 [2024-12-07 17:32:18.331615] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:45.255 [2024-12-07 17:32:18.347182] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:15:45.255 17:32:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.255 17:32:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:45.255 [2024-12-07 17:32:18.354628] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:46.194 17:32:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:46.194 17:32:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:46.194 17:32:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:46.194 17:32:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:46.194 17:32:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:46.194 17:32:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.194 17:32:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.194 17:32:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.194 17:32:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.194 17:32:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.194 17:32:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:46.194 "name": "raid_bdev1", 00:15:46.194 "uuid": "4af4702b-e399-46a7-ac56-22d99443136e", 00:15:46.194 "strip_size_kb": 64, 00:15:46.194 "state": "online", 00:15:46.194 
"raid_level": "raid5f", 00:15:46.194 "superblock": true, 00:15:46.194 "num_base_bdevs": 3, 00:15:46.194 "num_base_bdevs_discovered": 3, 00:15:46.194 "num_base_bdevs_operational": 3, 00:15:46.194 "process": { 00:15:46.194 "type": "rebuild", 00:15:46.194 "target": "spare", 00:15:46.194 "progress": { 00:15:46.194 "blocks": 20480, 00:15:46.194 "percent": 16 00:15:46.194 } 00:15:46.194 }, 00:15:46.194 "base_bdevs_list": [ 00:15:46.194 { 00:15:46.194 "name": "spare", 00:15:46.194 "uuid": "e7df1b9a-7108-507d-bada-8ba85153c732", 00:15:46.194 "is_configured": true, 00:15:46.194 "data_offset": 2048, 00:15:46.194 "data_size": 63488 00:15:46.194 }, 00:15:46.194 { 00:15:46.194 "name": "BaseBdev2", 00:15:46.194 "uuid": "f71d7415-20f4-5a5f-b2db-68ddb2345516", 00:15:46.194 "is_configured": true, 00:15:46.194 "data_offset": 2048, 00:15:46.194 "data_size": 63488 00:15:46.194 }, 00:15:46.194 { 00:15:46.194 "name": "BaseBdev3", 00:15:46.194 "uuid": "9075a075-d7fb-507a-b2b9-57f57a1d881b", 00:15:46.194 "is_configured": true, 00:15:46.194 "data_offset": 2048, 00:15:46.194 "data_size": 63488 00:15:46.194 } 00:15:46.194 ] 00:15:46.194 }' 00:15:46.194 17:32:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:46.194 17:32:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:46.194 17:32:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:46.194 17:32:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:46.194 17:32:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:46.194 17:32:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.194 17:32:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.195 [2024-12-07 17:32:19.465317] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:46.195 [2024-12-07 17:32:19.562614] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:46.195 [2024-12-07 17:32:19.562674] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:46.195 [2024-12-07 17:32:19.562690] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:46.195 [2024-12-07 17:32:19.562699] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:46.454 17:32:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.454 17:32:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:46.454 17:32:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:46.454 17:32:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:46.454 17:32:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:46.454 17:32:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:46.454 17:32:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:46.454 17:32:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.454 17:32:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.454 17:32:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.454 17:32:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.454 17:32:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.454 17:32:19 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.454 17:32:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.454 17:32:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.454 17:32:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.454 17:32:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.454 "name": "raid_bdev1", 00:15:46.454 "uuid": "4af4702b-e399-46a7-ac56-22d99443136e", 00:15:46.454 "strip_size_kb": 64, 00:15:46.454 "state": "online", 00:15:46.454 "raid_level": "raid5f", 00:15:46.454 "superblock": true, 00:15:46.454 "num_base_bdevs": 3, 00:15:46.454 "num_base_bdevs_discovered": 2, 00:15:46.454 "num_base_bdevs_operational": 2, 00:15:46.454 "base_bdevs_list": [ 00:15:46.455 { 00:15:46.455 "name": null, 00:15:46.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.455 "is_configured": false, 00:15:46.455 "data_offset": 0, 00:15:46.455 "data_size": 63488 00:15:46.455 }, 00:15:46.455 { 00:15:46.455 "name": "BaseBdev2", 00:15:46.455 "uuid": "f71d7415-20f4-5a5f-b2db-68ddb2345516", 00:15:46.455 "is_configured": true, 00:15:46.455 "data_offset": 2048, 00:15:46.455 "data_size": 63488 00:15:46.455 }, 00:15:46.455 { 00:15:46.455 "name": "BaseBdev3", 00:15:46.455 "uuid": "9075a075-d7fb-507a-b2b9-57f57a1d881b", 00:15:46.455 "is_configured": true, 00:15:46.455 "data_offset": 2048, 00:15:46.455 "data_size": 63488 00:15:46.455 } 00:15:46.455 ] 00:15:46.455 }' 00:15:46.455 17:32:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.455 17:32:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.714 17:32:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:46.714 17:32:20 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.714 17:32:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.715 [2024-12-07 17:32:20.077047] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:46.715 [2024-12-07 17:32:20.077162] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:46.715 [2024-12-07 17:32:20.077201] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:15:46.715 [2024-12-07 17:32:20.077234] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:46.715 [2024-12-07 17:32:20.077754] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:46.715 [2024-12-07 17:32:20.077831] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:46.715 [2024-12-07 17:32:20.077971] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:46.715 [2024-12-07 17:32:20.078022] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:46.715 [2024-12-07 17:32:20.078067] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:46.715 [2024-12-07 17:32:20.078147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:46.715 [2024-12-07 17:32:20.094044] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:15:46.715 spare 00:15:46.974 17:32:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.974 17:32:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:46.974 [2024-12-07 17:32:20.101501] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:47.913 17:32:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:47.913 17:32:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:47.913 17:32:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:47.913 17:32:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:47.913 17:32:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:47.913 17:32:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.913 17:32:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.913 17:32:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.913 17:32:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.913 17:32:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.913 17:32:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:47.913 "name": "raid_bdev1", 00:15:47.913 "uuid": "4af4702b-e399-46a7-ac56-22d99443136e", 00:15:47.913 "strip_size_kb": 64, 00:15:47.913 "state": 
"online", 00:15:47.913 "raid_level": "raid5f", 00:15:47.913 "superblock": true, 00:15:47.913 "num_base_bdevs": 3, 00:15:47.913 "num_base_bdevs_discovered": 3, 00:15:47.913 "num_base_bdevs_operational": 3, 00:15:47.913 "process": { 00:15:47.913 "type": "rebuild", 00:15:47.913 "target": "spare", 00:15:47.913 "progress": { 00:15:47.913 "blocks": 20480, 00:15:47.913 "percent": 16 00:15:47.913 } 00:15:47.913 }, 00:15:47.913 "base_bdevs_list": [ 00:15:47.913 { 00:15:47.913 "name": "spare", 00:15:47.913 "uuid": "e7df1b9a-7108-507d-bada-8ba85153c732", 00:15:47.913 "is_configured": true, 00:15:47.913 "data_offset": 2048, 00:15:47.913 "data_size": 63488 00:15:47.913 }, 00:15:47.913 { 00:15:47.913 "name": "BaseBdev2", 00:15:47.913 "uuid": "f71d7415-20f4-5a5f-b2db-68ddb2345516", 00:15:47.913 "is_configured": true, 00:15:47.913 "data_offset": 2048, 00:15:47.913 "data_size": 63488 00:15:47.913 }, 00:15:47.913 { 00:15:47.913 "name": "BaseBdev3", 00:15:47.913 "uuid": "9075a075-d7fb-507a-b2b9-57f57a1d881b", 00:15:47.913 "is_configured": true, 00:15:47.913 "data_offset": 2048, 00:15:47.913 "data_size": 63488 00:15:47.913 } 00:15:47.913 ] 00:15:47.913 }' 00:15:47.913 17:32:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:47.913 17:32:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:47.913 17:32:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:47.913 17:32:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:47.913 17:32:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:47.913 17:32:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.913 17:32:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.913 [2024-12-07 17:32:21.236335] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:48.174 [2024-12-07 17:32:21.309786] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:48.174 [2024-12-07 17:32:21.309859] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:48.174 [2024-12-07 17:32:21.309877] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:48.174 [2024-12-07 17:32:21.309884] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:48.174 17:32:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.174 17:32:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:48.174 17:32:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:48.174 17:32:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:48.174 17:32:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:48.174 17:32:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:48.174 17:32:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:48.174 17:32:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.174 17:32:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.174 17:32:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.174 17:32:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.174 17:32:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.174 17:32:21 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.174 17:32:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.174 17:32:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.174 17:32:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.174 17:32:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.174 "name": "raid_bdev1", 00:15:48.174 "uuid": "4af4702b-e399-46a7-ac56-22d99443136e", 00:15:48.174 "strip_size_kb": 64, 00:15:48.174 "state": "online", 00:15:48.174 "raid_level": "raid5f", 00:15:48.174 "superblock": true, 00:15:48.174 "num_base_bdevs": 3, 00:15:48.174 "num_base_bdevs_discovered": 2, 00:15:48.174 "num_base_bdevs_operational": 2, 00:15:48.174 "base_bdevs_list": [ 00:15:48.174 { 00:15:48.174 "name": null, 00:15:48.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.174 "is_configured": false, 00:15:48.174 "data_offset": 0, 00:15:48.174 "data_size": 63488 00:15:48.174 }, 00:15:48.174 { 00:15:48.174 "name": "BaseBdev2", 00:15:48.174 "uuid": "f71d7415-20f4-5a5f-b2db-68ddb2345516", 00:15:48.174 "is_configured": true, 00:15:48.174 "data_offset": 2048, 00:15:48.174 "data_size": 63488 00:15:48.174 }, 00:15:48.174 { 00:15:48.174 "name": "BaseBdev3", 00:15:48.174 "uuid": "9075a075-d7fb-507a-b2b9-57f57a1d881b", 00:15:48.174 "is_configured": true, 00:15:48.174 "data_offset": 2048, 00:15:48.174 "data_size": 63488 00:15:48.174 } 00:15:48.174 ] 00:15:48.174 }' 00:15:48.174 17:32:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.174 17:32:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.435 17:32:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:48.435 17:32:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:15:48.435 17:32:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:48.435 17:32:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:48.435 17:32:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:48.435 17:32:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.435 17:32:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.435 17:32:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.435 17:32:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.435 17:32:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.435 17:32:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:48.435 "name": "raid_bdev1", 00:15:48.435 "uuid": "4af4702b-e399-46a7-ac56-22d99443136e", 00:15:48.435 "strip_size_kb": 64, 00:15:48.435 "state": "online", 00:15:48.435 "raid_level": "raid5f", 00:15:48.435 "superblock": true, 00:15:48.435 "num_base_bdevs": 3, 00:15:48.435 "num_base_bdevs_discovered": 2, 00:15:48.435 "num_base_bdevs_operational": 2, 00:15:48.435 "base_bdevs_list": [ 00:15:48.435 { 00:15:48.435 "name": null, 00:15:48.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.435 "is_configured": false, 00:15:48.435 "data_offset": 0, 00:15:48.435 "data_size": 63488 00:15:48.435 }, 00:15:48.435 { 00:15:48.435 "name": "BaseBdev2", 00:15:48.435 "uuid": "f71d7415-20f4-5a5f-b2db-68ddb2345516", 00:15:48.435 "is_configured": true, 00:15:48.435 "data_offset": 2048, 00:15:48.435 "data_size": 63488 00:15:48.435 }, 00:15:48.435 { 00:15:48.435 "name": "BaseBdev3", 00:15:48.435 "uuid": "9075a075-d7fb-507a-b2b9-57f57a1d881b", 00:15:48.435 "is_configured": true, 
00:15:48.435 "data_offset": 2048, 00:15:48.435 "data_size": 63488 00:15:48.435 } 00:15:48.435 ] 00:15:48.435 }' 00:15:48.435 17:32:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:48.435 17:32:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:48.435 17:32:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:48.694 17:32:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:48.694 17:32:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:48.694 17:32:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.694 17:32:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.694 17:32:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.694 17:32:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:48.694 17:32:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.694 17:32:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.694 [2024-12-07 17:32:21.879902] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:48.694 [2024-12-07 17:32:21.879970] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.694 [2024-12-07 17:32:21.879996] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:48.694 [2024-12-07 17:32:21.880006] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.694 [2024-12-07 17:32:21.880493] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.694 [2024-12-07 
17:32:21.880518] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:48.694 [2024-12-07 17:32:21.880600] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:48.695 [2024-12-07 17:32:21.880613] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:48.695 [2024-12-07 17:32:21.880635] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:48.695 [2024-12-07 17:32:21.880645] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:48.695 BaseBdev1 00:15:48.695 17:32:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.695 17:32:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:49.634 17:32:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:49.634 17:32:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:49.634 17:32:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:49.634 17:32:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:49.634 17:32:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:49.634 17:32:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:49.634 17:32:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.634 17:32:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.634 17:32:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.634 17:32:22 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.634 17:32:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.634 17:32:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.634 17:32:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.634 17:32:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.634 17:32:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.634 17:32:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.634 "name": "raid_bdev1", 00:15:49.634 "uuid": "4af4702b-e399-46a7-ac56-22d99443136e", 00:15:49.634 "strip_size_kb": 64, 00:15:49.634 "state": "online", 00:15:49.634 "raid_level": "raid5f", 00:15:49.634 "superblock": true, 00:15:49.634 "num_base_bdevs": 3, 00:15:49.634 "num_base_bdevs_discovered": 2, 00:15:49.634 "num_base_bdevs_operational": 2, 00:15:49.634 "base_bdevs_list": [ 00:15:49.634 { 00:15:49.634 "name": null, 00:15:49.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.634 "is_configured": false, 00:15:49.634 "data_offset": 0, 00:15:49.634 "data_size": 63488 00:15:49.634 }, 00:15:49.634 { 00:15:49.634 "name": "BaseBdev2", 00:15:49.634 "uuid": "f71d7415-20f4-5a5f-b2db-68ddb2345516", 00:15:49.634 "is_configured": true, 00:15:49.634 "data_offset": 2048, 00:15:49.634 "data_size": 63488 00:15:49.634 }, 00:15:49.634 { 00:15:49.634 "name": "BaseBdev3", 00:15:49.634 "uuid": "9075a075-d7fb-507a-b2b9-57f57a1d881b", 00:15:49.634 "is_configured": true, 00:15:49.634 "data_offset": 2048, 00:15:49.634 "data_size": 63488 00:15:49.634 } 00:15:49.634 ] 00:15:49.634 }' 00:15:49.634 17:32:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.634 17:32:22 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:50.204 17:32:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:50.204 17:32:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:50.204 17:32:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:50.204 17:32:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:50.204 17:32:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:50.204 17:32:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.204 17:32:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.204 17:32:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.204 17:32:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.204 17:32:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.204 17:32:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:50.204 "name": "raid_bdev1", 00:15:50.204 "uuid": "4af4702b-e399-46a7-ac56-22d99443136e", 00:15:50.204 "strip_size_kb": 64, 00:15:50.204 "state": "online", 00:15:50.204 "raid_level": "raid5f", 00:15:50.204 "superblock": true, 00:15:50.204 "num_base_bdevs": 3, 00:15:50.204 "num_base_bdevs_discovered": 2, 00:15:50.204 "num_base_bdevs_operational": 2, 00:15:50.204 "base_bdevs_list": [ 00:15:50.204 { 00:15:50.204 "name": null, 00:15:50.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.204 "is_configured": false, 00:15:50.204 "data_offset": 0, 00:15:50.204 "data_size": 63488 00:15:50.204 }, 00:15:50.204 { 00:15:50.204 "name": "BaseBdev2", 00:15:50.204 "uuid": "f71d7415-20f4-5a5f-b2db-68ddb2345516", 
00:15:50.204 "is_configured": true, 00:15:50.204 "data_offset": 2048, 00:15:50.204 "data_size": 63488 00:15:50.204 }, 00:15:50.204 { 00:15:50.204 "name": "BaseBdev3", 00:15:50.204 "uuid": "9075a075-d7fb-507a-b2b9-57f57a1d881b", 00:15:50.204 "is_configured": true, 00:15:50.204 "data_offset": 2048, 00:15:50.204 "data_size": 63488 00:15:50.204 } 00:15:50.204 ] 00:15:50.204 }' 00:15:50.204 17:32:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:50.204 17:32:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:50.204 17:32:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:50.204 17:32:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:50.204 17:32:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:50.204 17:32:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:15:50.204 17:32:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:50.204 17:32:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:50.204 17:32:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:50.204 17:32:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:50.204 17:32:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:50.204 17:32:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:50.204 17:32:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.204 17:32:23 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.204 [2024-12-07 17:32:23.493347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:50.204 [2024-12-07 17:32:23.493583] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:50.204 [2024-12-07 17:32:23.493603] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:50.204 request: 00:15:50.204 { 00:15:50.204 "base_bdev": "BaseBdev1", 00:15:50.204 "raid_bdev": "raid_bdev1", 00:15:50.204 "method": "bdev_raid_add_base_bdev", 00:15:50.204 "req_id": 1 00:15:50.204 } 00:15:50.204 Got JSON-RPC error response 00:15:50.204 response: 00:15:50.204 { 00:15:50.204 "code": -22, 00:15:50.204 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:50.204 } 00:15:50.204 17:32:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:50.204 17:32:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:15:50.204 17:32:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:50.204 17:32:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:50.204 17:32:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:50.204 17:32:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:51.144 17:32:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:51.144 17:32:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:51.144 17:32:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:51.144 17:32:24 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:51.144 17:32:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.144 17:32:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:51.145 17:32:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.145 17:32:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.145 17:32:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.145 17:32:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.145 17:32:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.145 17:32:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.145 17:32:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.145 17:32:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.404 17:32:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.404 17:32:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.404 "name": "raid_bdev1", 00:15:51.404 "uuid": "4af4702b-e399-46a7-ac56-22d99443136e", 00:15:51.404 "strip_size_kb": 64, 00:15:51.404 "state": "online", 00:15:51.404 "raid_level": "raid5f", 00:15:51.404 "superblock": true, 00:15:51.404 "num_base_bdevs": 3, 00:15:51.404 "num_base_bdevs_discovered": 2, 00:15:51.404 "num_base_bdevs_operational": 2, 00:15:51.404 "base_bdevs_list": [ 00:15:51.404 { 00:15:51.404 "name": null, 00:15:51.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.404 "is_configured": false, 00:15:51.404 "data_offset": 0, 00:15:51.404 "data_size": 63488 00:15:51.404 }, 00:15:51.404 { 00:15:51.404 
"name": "BaseBdev2", 00:15:51.404 "uuid": "f71d7415-20f4-5a5f-b2db-68ddb2345516", 00:15:51.404 "is_configured": true, 00:15:51.404 "data_offset": 2048, 00:15:51.404 "data_size": 63488 00:15:51.404 }, 00:15:51.404 { 00:15:51.404 "name": "BaseBdev3", 00:15:51.404 "uuid": "9075a075-d7fb-507a-b2b9-57f57a1d881b", 00:15:51.404 "is_configured": true, 00:15:51.404 "data_offset": 2048, 00:15:51.404 "data_size": 63488 00:15:51.404 } 00:15:51.404 ] 00:15:51.404 }' 00:15:51.404 17:32:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.404 17:32:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.664 17:32:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:51.664 17:32:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:51.664 17:32:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:51.664 17:32:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:51.664 17:32:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:51.664 17:32:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.664 17:32:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.664 17:32:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.664 17:32:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.664 17:32:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.664 17:32:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:51.664 "name": "raid_bdev1", 00:15:51.664 "uuid": "4af4702b-e399-46a7-ac56-22d99443136e", 00:15:51.664 
"strip_size_kb": 64, 00:15:51.664 "state": "online", 00:15:51.664 "raid_level": "raid5f", 00:15:51.664 "superblock": true, 00:15:51.664 "num_base_bdevs": 3, 00:15:51.664 "num_base_bdevs_discovered": 2, 00:15:51.664 "num_base_bdevs_operational": 2, 00:15:51.664 "base_bdevs_list": [ 00:15:51.664 { 00:15:51.664 "name": null, 00:15:51.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.664 "is_configured": false, 00:15:51.664 "data_offset": 0, 00:15:51.664 "data_size": 63488 00:15:51.664 }, 00:15:51.664 { 00:15:51.664 "name": "BaseBdev2", 00:15:51.664 "uuid": "f71d7415-20f4-5a5f-b2db-68ddb2345516", 00:15:51.664 "is_configured": true, 00:15:51.664 "data_offset": 2048, 00:15:51.664 "data_size": 63488 00:15:51.664 }, 00:15:51.664 { 00:15:51.664 "name": "BaseBdev3", 00:15:51.664 "uuid": "9075a075-d7fb-507a-b2b9-57f57a1d881b", 00:15:51.664 "is_configured": true, 00:15:51.664 "data_offset": 2048, 00:15:51.664 "data_size": 63488 00:15:51.664 } 00:15:51.664 ] 00:15:51.664 }' 00:15:51.664 17:32:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:51.926 17:32:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:51.926 17:32:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:51.926 17:32:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:51.926 17:32:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82010 00:15:51.926 17:32:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 82010 ']' 00:15:51.926 17:32:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 82010 00:15:51.926 17:32:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:51.926 17:32:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:51.926 17:32:25 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82010 00:15:51.926 17:32:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:51.926 17:32:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:51.927 17:32:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82010' 00:15:51.927 killing process with pid 82010 00:15:51.927 17:32:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 82010 00:15:51.927 Received shutdown signal, test time was about 60.000000 seconds 00:15:51.927 00:15:51.927 Latency(us) 00:15:51.927 [2024-12-07T17:32:25.309Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:51.927 [2024-12-07T17:32:25.309Z] =================================================================================================================== 00:15:51.927 [2024-12-07T17:32:25.309Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:51.927 [2024-12-07 17:32:25.176550] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:51.927 [2024-12-07 17:32:25.176679] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:51.927 17:32:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 82010 00:15:51.927 [2024-12-07 17:32:25.176743] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:51.927 [2024-12-07 17:32:25.176755] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:52.187 [2024-12-07 17:32:25.564430] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:53.666 17:32:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:15:53.666 00:15:53.666 real 0m22.923s 00:15:53.666 user 0m29.214s 
00:15:53.666 sys 0m2.718s 00:15:53.666 ************************************ 00:15:53.666 END TEST raid5f_rebuild_test_sb 00:15:53.666 ************************************ 00:15:53.666 17:32:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:53.666 17:32:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.666 17:32:26 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:15:53.666 17:32:26 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:15:53.666 17:32:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:53.666 17:32:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:53.666 17:32:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:53.666 ************************************ 00:15:53.666 START TEST raid5f_state_function_test 00:15:53.666 ************************************ 00:15:53.666 17:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:15:53.666 17:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:53.666 17:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:53.666 17:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:53.666 17:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:53.666 17:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:53.666 17:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:53.666 17:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:53.666 17:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:15:53.666 17:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:53.666 17:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:53.666 17:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:53.666 17:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:53.666 17:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:53.666 17:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:53.666 17:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:53.666 17:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:53.666 17:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:53.666 17:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:53.666 17:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:53.666 17:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:53.666 17:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:53.666 17:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:53.666 17:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:53.666 17:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:53.667 17:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:53.667 17:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:15:53.667 17:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:53.667 17:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:53.667 17:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:53.667 17:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=82760 00:15:53.667 17:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:53.667 17:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82760' 00:15:53.667 Process raid pid: 82760 00:15:53.667 17:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 82760 00:15:53.667 17:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 82760 ']' 00:15:53.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:53.667 17:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:53.667 17:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:53.667 17:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:53.667 17:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:53.667 17:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.667 [2024-12-07 17:32:26.800221] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:15:53.667 [2024-12-07 17:32:26.800331] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:53.667 [2024-12-07 17:32:26.972749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.927 [2024-12-07 17:32:27.081791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.927 [2024-12-07 17:32:27.292469] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:53.927 [2024-12-07 17:32:27.292499] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:54.496 17:32:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:54.496 17:32:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:15:54.496 17:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:54.496 17:32:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.496 17:32:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.496 [2024-12-07 17:32:27.641814] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:54.496 [2024-12-07 17:32:27.641874] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:54.496 [2024-12-07 17:32:27.641884] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:54.496 [2024-12-07 17:32:27.641893] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:54.496 [2024-12-07 17:32:27.641899] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:15:54.496 [2024-12-07 17:32:27.641908] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:54.496 [2024-12-07 17:32:27.641914] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:54.496 [2024-12-07 17:32:27.641922] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:54.496 17:32:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.496 17:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:54.496 17:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:54.496 17:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:54.496 17:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:54.496 17:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:54.496 17:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:54.496 17:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.496 17:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.496 17:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.496 17:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.496 17:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.496 17:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:54.496 17:32:27 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.496 17:32:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.496 17:32:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.496 17:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.496 "name": "Existed_Raid", 00:15:54.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.496 "strip_size_kb": 64, 00:15:54.496 "state": "configuring", 00:15:54.496 "raid_level": "raid5f", 00:15:54.496 "superblock": false, 00:15:54.496 "num_base_bdevs": 4, 00:15:54.496 "num_base_bdevs_discovered": 0, 00:15:54.496 "num_base_bdevs_operational": 4, 00:15:54.496 "base_bdevs_list": [ 00:15:54.496 { 00:15:54.496 "name": "BaseBdev1", 00:15:54.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.496 "is_configured": false, 00:15:54.496 "data_offset": 0, 00:15:54.496 "data_size": 0 00:15:54.496 }, 00:15:54.496 { 00:15:54.496 "name": "BaseBdev2", 00:15:54.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.496 "is_configured": false, 00:15:54.496 "data_offset": 0, 00:15:54.496 "data_size": 0 00:15:54.496 }, 00:15:54.496 { 00:15:54.496 "name": "BaseBdev3", 00:15:54.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.496 "is_configured": false, 00:15:54.496 "data_offset": 0, 00:15:54.496 "data_size": 0 00:15:54.496 }, 00:15:54.496 { 00:15:54.496 "name": "BaseBdev4", 00:15:54.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.496 "is_configured": false, 00:15:54.496 "data_offset": 0, 00:15:54.496 "data_size": 0 00:15:54.496 } 00:15:54.496 ] 00:15:54.496 }' 00:15:54.496 17:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.496 17:32:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.756 17:32:28 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:54.756 17:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.756 17:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.756 [2024-12-07 17:32:28.124918] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:54.756 [2024-12-07 17:32:28.125032] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:54.756 17:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.756 17:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:54.756 17:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.756 17:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.756 [2024-12-07 17:32:28.132917] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:54.756 [2024-12-07 17:32:28.133008] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:54.756 [2024-12-07 17:32:28.133038] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:54.756 [2024-12-07 17:32:28.133061] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:54.756 [2024-12-07 17:32:28.133080] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:54.756 [2024-12-07 17:32:28.133103] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:54.756 [2024-12-07 17:32:28.133138] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:15:54.756 [2024-12-07 17:32:28.133177] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:55.016 17:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.016 17:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:55.016 17:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.016 17:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.016 [2024-12-07 17:32:28.178565] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:55.016 BaseBdev1 00:15:55.016 17:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.016 17:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:55.016 17:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:55.016 17:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:55.016 17:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:55.016 17:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:55.016 17:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:55.016 17:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:55.016 17:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.016 17:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.016 17:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.016 
17:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:55.016 17:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.016 17:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.016 [ 00:15:55.016 { 00:15:55.016 "name": "BaseBdev1", 00:15:55.016 "aliases": [ 00:15:55.016 "1944f5a4-09d8-46a9-8a46-87bc687fd12f" 00:15:55.016 ], 00:15:55.016 "product_name": "Malloc disk", 00:15:55.016 "block_size": 512, 00:15:55.016 "num_blocks": 65536, 00:15:55.016 "uuid": "1944f5a4-09d8-46a9-8a46-87bc687fd12f", 00:15:55.016 "assigned_rate_limits": { 00:15:55.016 "rw_ios_per_sec": 0, 00:15:55.016 "rw_mbytes_per_sec": 0, 00:15:55.016 "r_mbytes_per_sec": 0, 00:15:55.016 "w_mbytes_per_sec": 0 00:15:55.016 }, 00:15:55.016 "claimed": true, 00:15:55.016 "claim_type": "exclusive_write", 00:15:55.016 "zoned": false, 00:15:55.016 "supported_io_types": { 00:15:55.016 "read": true, 00:15:55.016 "write": true, 00:15:55.016 "unmap": true, 00:15:55.016 "flush": true, 00:15:55.016 "reset": true, 00:15:55.016 "nvme_admin": false, 00:15:55.016 "nvme_io": false, 00:15:55.016 "nvme_io_md": false, 00:15:55.016 "write_zeroes": true, 00:15:55.016 "zcopy": true, 00:15:55.016 "get_zone_info": false, 00:15:55.016 "zone_management": false, 00:15:55.016 "zone_append": false, 00:15:55.016 "compare": false, 00:15:55.016 "compare_and_write": false, 00:15:55.016 "abort": true, 00:15:55.016 "seek_hole": false, 00:15:55.016 "seek_data": false, 00:15:55.016 "copy": true, 00:15:55.016 "nvme_iov_md": false 00:15:55.016 }, 00:15:55.016 "memory_domains": [ 00:15:55.016 { 00:15:55.016 "dma_device_id": "system", 00:15:55.016 "dma_device_type": 1 00:15:55.016 }, 00:15:55.016 { 00:15:55.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:55.016 "dma_device_type": 2 00:15:55.016 } 00:15:55.016 ], 00:15:55.016 "driver_specific": {} 00:15:55.016 } 
00:15:55.016 ] 00:15:55.016 17:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.016 17:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:55.016 17:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:55.016 17:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:55.016 17:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:55.016 17:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:55.016 17:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:55.016 17:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:55.016 17:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.016 17:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.016 17:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.016 17:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.016 17:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.016 17:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:55.016 17:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.016 17:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.016 17:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:55.016 17:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.016 "name": "Existed_Raid", 00:15:55.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.016 "strip_size_kb": 64, 00:15:55.016 "state": "configuring", 00:15:55.016 "raid_level": "raid5f", 00:15:55.016 "superblock": false, 00:15:55.016 "num_base_bdevs": 4, 00:15:55.016 "num_base_bdevs_discovered": 1, 00:15:55.016 "num_base_bdevs_operational": 4, 00:15:55.016 "base_bdevs_list": [ 00:15:55.016 { 00:15:55.016 "name": "BaseBdev1", 00:15:55.016 "uuid": "1944f5a4-09d8-46a9-8a46-87bc687fd12f", 00:15:55.016 "is_configured": true, 00:15:55.016 "data_offset": 0, 00:15:55.016 "data_size": 65536 00:15:55.016 }, 00:15:55.016 { 00:15:55.016 "name": "BaseBdev2", 00:15:55.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.016 "is_configured": false, 00:15:55.016 "data_offset": 0, 00:15:55.016 "data_size": 0 00:15:55.016 }, 00:15:55.016 { 00:15:55.016 "name": "BaseBdev3", 00:15:55.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.016 "is_configured": false, 00:15:55.016 "data_offset": 0, 00:15:55.016 "data_size": 0 00:15:55.016 }, 00:15:55.016 { 00:15:55.016 "name": "BaseBdev4", 00:15:55.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.016 "is_configured": false, 00:15:55.016 "data_offset": 0, 00:15:55.016 "data_size": 0 00:15:55.016 } 00:15:55.016 ] 00:15:55.016 }' 00:15:55.016 17:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.016 17:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.275 17:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:55.276 17:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.276 17:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.276 
[2024-12-07 17:32:28.621865] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:55.276 [2024-12-07 17:32:28.621979] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:55.276 17:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.276 17:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:55.276 17:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.276 17:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.276 [2024-12-07 17:32:28.633899] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:55.276 [2024-12-07 17:32:28.635725] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:55.276 [2024-12-07 17:32:28.635805] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:55.276 [2024-12-07 17:32:28.635836] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:55.276 [2024-12-07 17:32:28.635864] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:55.276 [2024-12-07 17:32:28.635913] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:55.276 [2024-12-07 17:32:28.635949] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:55.276 17:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.276 17:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:55.276 17:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:15:55.276 17:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:55.276 17:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:55.276 17:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:55.276 17:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:55.276 17:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:55.276 17:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:55.276 17:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.276 17:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.276 17:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.276 17:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.276 17:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.276 17:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.276 17:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.276 17:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:55.534 17:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.534 17:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.534 "name": "Existed_Raid", 00:15:55.534 "uuid": "00000000-0000-0000-0000-000000000000", 
00:15:55.534 "strip_size_kb": 64, 00:15:55.534 "state": "configuring", 00:15:55.534 "raid_level": "raid5f", 00:15:55.534 "superblock": false, 00:15:55.534 "num_base_bdevs": 4, 00:15:55.534 "num_base_bdevs_discovered": 1, 00:15:55.534 "num_base_bdevs_operational": 4, 00:15:55.534 "base_bdevs_list": [ 00:15:55.534 { 00:15:55.534 "name": "BaseBdev1", 00:15:55.534 "uuid": "1944f5a4-09d8-46a9-8a46-87bc687fd12f", 00:15:55.534 "is_configured": true, 00:15:55.534 "data_offset": 0, 00:15:55.534 "data_size": 65536 00:15:55.534 }, 00:15:55.534 { 00:15:55.534 "name": "BaseBdev2", 00:15:55.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.534 "is_configured": false, 00:15:55.534 "data_offset": 0, 00:15:55.534 "data_size": 0 00:15:55.534 }, 00:15:55.534 { 00:15:55.534 "name": "BaseBdev3", 00:15:55.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.534 "is_configured": false, 00:15:55.534 "data_offset": 0, 00:15:55.534 "data_size": 0 00:15:55.534 }, 00:15:55.534 { 00:15:55.534 "name": "BaseBdev4", 00:15:55.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.534 "is_configured": false, 00:15:55.534 "data_offset": 0, 00:15:55.534 "data_size": 0 00:15:55.534 } 00:15:55.534 ] 00:15:55.534 }' 00:15:55.534 17:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.534 17:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.793 17:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:55.793 17:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.793 17:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.793 [2024-12-07 17:32:29.127235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:55.793 BaseBdev2 00:15:55.793 17:32:29 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.793 17:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:55.793 17:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:55.793 17:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:55.793 17:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:55.793 17:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:55.793 17:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:55.793 17:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:55.793 17:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.793 17:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.793 17:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.793 17:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:55.793 17:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.793 17:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.793 [ 00:15:55.793 { 00:15:55.793 "name": "BaseBdev2", 00:15:55.793 "aliases": [ 00:15:55.793 "3ac4a3e6-1082-4fcf-8ae1-d60c57d141a3" 00:15:55.793 ], 00:15:55.793 "product_name": "Malloc disk", 00:15:55.793 "block_size": 512, 00:15:55.793 "num_blocks": 65536, 00:15:55.793 "uuid": "3ac4a3e6-1082-4fcf-8ae1-d60c57d141a3", 00:15:55.793 "assigned_rate_limits": { 00:15:55.793 "rw_ios_per_sec": 0, 00:15:55.793 "rw_mbytes_per_sec": 0, 00:15:55.793 
"r_mbytes_per_sec": 0, 00:15:55.793 "w_mbytes_per_sec": 0 00:15:55.793 }, 00:15:55.793 "claimed": true, 00:15:55.793 "claim_type": "exclusive_write", 00:15:55.793 "zoned": false, 00:15:55.793 "supported_io_types": { 00:15:55.793 "read": true, 00:15:55.793 "write": true, 00:15:55.793 "unmap": true, 00:15:55.793 "flush": true, 00:15:55.793 "reset": true, 00:15:55.793 "nvme_admin": false, 00:15:55.793 "nvme_io": false, 00:15:55.793 "nvme_io_md": false, 00:15:55.793 "write_zeroes": true, 00:15:55.793 "zcopy": true, 00:15:55.793 "get_zone_info": false, 00:15:55.793 "zone_management": false, 00:15:55.793 "zone_append": false, 00:15:55.793 "compare": false, 00:15:55.793 "compare_and_write": false, 00:15:55.793 "abort": true, 00:15:55.793 "seek_hole": false, 00:15:55.793 "seek_data": false, 00:15:55.793 "copy": true, 00:15:55.793 "nvme_iov_md": false 00:15:55.793 }, 00:15:55.793 "memory_domains": [ 00:15:55.793 { 00:15:55.793 "dma_device_id": "system", 00:15:55.793 "dma_device_type": 1 00:15:55.793 }, 00:15:55.793 { 00:15:55.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:55.793 "dma_device_type": 2 00:15:55.793 } 00:15:55.793 ], 00:15:55.793 "driver_specific": {} 00:15:55.793 } 00:15:55.793 ] 00:15:55.793 17:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.793 17:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:55.793 17:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:55.793 17:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:55.793 17:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:55.793 17:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:55.793 17:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:15:55.793 17:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:55.793 17:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:55.793 17:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:55.793 17:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.793 17:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.793 17:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.793 17:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.052 17:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.052 17:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:56.052 17:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.052 17:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.052 17:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.052 17:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.052 "name": "Existed_Raid", 00:15:56.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.052 "strip_size_kb": 64, 00:15:56.052 "state": "configuring", 00:15:56.052 "raid_level": "raid5f", 00:15:56.052 "superblock": false, 00:15:56.052 "num_base_bdevs": 4, 00:15:56.052 "num_base_bdevs_discovered": 2, 00:15:56.052 "num_base_bdevs_operational": 4, 00:15:56.052 "base_bdevs_list": [ 00:15:56.052 { 00:15:56.052 "name": "BaseBdev1", 00:15:56.053 "uuid": 
"1944f5a4-09d8-46a9-8a46-87bc687fd12f", 00:15:56.053 "is_configured": true, 00:15:56.053 "data_offset": 0, 00:15:56.053 "data_size": 65536 00:15:56.053 }, 00:15:56.053 { 00:15:56.053 "name": "BaseBdev2", 00:15:56.053 "uuid": "3ac4a3e6-1082-4fcf-8ae1-d60c57d141a3", 00:15:56.053 "is_configured": true, 00:15:56.053 "data_offset": 0, 00:15:56.053 "data_size": 65536 00:15:56.053 }, 00:15:56.053 { 00:15:56.053 "name": "BaseBdev3", 00:15:56.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.053 "is_configured": false, 00:15:56.053 "data_offset": 0, 00:15:56.053 "data_size": 0 00:15:56.053 }, 00:15:56.053 { 00:15:56.053 "name": "BaseBdev4", 00:15:56.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.053 "is_configured": false, 00:15:56.053 "data_offset": 0, 00:15:56.053 "data_size": 0 00:15:56.053 } 00:15:56.053 ] 00:15:56.053 }' 00:15:56.053 17:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.053 17:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.312 17:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:56.312 17:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.312 17:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.312 [2024-12-07 17:32:29.601317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:56.312 BaseBdev3 00:15:56.312 17:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.312 17:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:56.312 17:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:56.312 17:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:15:56.312 17:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:56.312 17:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:56.312 17:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:56.312 17:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:56.312 17:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.312 17:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.312 17:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.312 17:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:56.312 17:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.312 17:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.312 [ 00:15:56.312 { 00:15:56.312 "name": "BaseBdev3", 00:15:56.312 "aliases": [ 00:15:56.312 "2750eef9-2d59-4e0b-8417-89b21a369117" 00:15:56.312 ], 00:15:56.312 "product_name": "Malloc disk", 00:15:56.312 "block_size": 512, 00:15:56.312 "num_blocks": 65536, 00:15:56.312 "uuid": "2750eef9-2d59-4e0b-8417-89b21a369117", 00:15:56.312 "assigned_rate_limits": { 00:15:56.312 "rw_ios_per_sec": 0, 00:15:56.312 "rw_mbytes_per_sec": 0, 00:15:56.312 "r_mbytes_per_sec": 0, 00:15:56.312 "w_mbytes_per_sec": 0 00:15:56.312 }, 00:15:56.312 "claimed": true, 00:15:56.312 "claim_type": "exclusive_write", 00:15:56.312 "zoned": false, 00:15:56.312 "supported_io_types": { 00:15:56.312 "read": true, 00:15:56.312 "write": true, 00:15:56.312 "unmap": true, 00:15:56.312 "flush": true, 00:15:56.312 "reset": true, 00:15:56.312 "nvme_admin": false, 
00:15:56.312 "nvme_io": false, 00:15:56.312 "nvme_io_md": false, 00:15:56.312 "write_zeroes": true, 00:15:56.312 "zcopy": true, 00:15:56.312 "get_zone_info": false, 00:15:56.312 "zone_management": false, 00:15:56.312 "zone_append": false, 00:15:56.312 "compare": false, 00:15:56.312 "compare_and_write": false, 00:15:56.312 "abort": true, 00:15:56.312 "seek_hole": false, 00:15:56.312 "seek_data": false, 00:15:56.312 "copy": true, 00:15:56.312 "nvme_iov_md": false 00:15:56.312 }, 00:15:56.312 "memory_domains": [ 00:15:56.312 { 00:15:56.312 "dma_device_id": "system", 00:15:56.312 "dma_device_type": 1 00:15:56.312 }, 00:15:56.312 { 00:15:56.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.312 "dma_device_type": 2 00:15:56.312 } 00:15:56.312 ], 00:15:56.312 "driver_specific": {} 00:15:56.312 } 00:15:56.313 ] 00:15:56.313 17:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.313 17:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:56.313 17:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:56.313 17:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:56.313 17:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:56.313 17:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:56.313 17:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:56.313 17:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:56.313 17:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:56.313 17:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:15:56.313 17:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.313 17:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.313 17:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.313 17:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.313 17:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.313 17:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:56.313 17:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.313 17:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.313 17:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.313 17:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.313 "name": "Existed_Raid", 00:15:56.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.313 "strip_size_kb": 64, 00:15:56.313 "state": "configuring", 00:15:56.313 "raid_level": "raid5f", 00:15:56.313 "superblock": false, 00:15:56.313 "num_base_bdevs": 4, 00:15:56.313 "num_base_bdevs_discovered": 3, 00:15:56.313 "num_base_bdevs_operational": 4, 00:15:56.313 "base_bdevs_list": [ 00:15:56.313 { 00:15:56.313 "name": "BaseBdev1", 00:15:56.313 "uuid": "1944f5a4-09d8-46a9-8a46-87bc687fd12f", 00:15:56.313 "is_configured": true, 00:15:56.313 "data_offset": 0, 00:15:56.313 "data_size": 65536 00:15:56.313 }, 00:15:56.313 { 00:15:56.313 "name": "BaseBdev2", 00:15:56.313 "uuid": "3ac4a3e6-1082-4fcf-8ae1-d60c57d141a3", 00:15:56.313 "is_configured": true, 00:15:56.313 "data_offset": 0, 00:15:56.313 "data_size": 65536 00:15:56.313 }, 00:15:56.313 { 
00:15:56.313 "name": "BaseBdev3", 00:15:56.313 "uuid": "2750eef9-2d59-4e0b-8417-89b21a369117", 00:15:56.313 "is_configured": true, 00:15:56.313 "data_offset": 0, 00:15:56.313 "data_size": 65536 00:15:56.313 }, 00:15:56.313 { 00:15:56.313 "name": "BaseBdev4", 00:15:56.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.313 "is_configured": false, 00:15:56.313 "data_offset": 0, 00:15:56.313 "data_size": 0 00:15:56.313 } 00:15:56.313 ] 00:15:56.313 }' 00:15:56.313 17:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.313 17:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.880 17:32:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:56.880 17:32:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.880 17:32:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.880 [2024-12-07 17:32:30.126287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:56.880 [2024-12-07 17:32:30.126429] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:56.880 [2024-12-07 17:32:30.126445] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:56.880 [2024-12-07 17:32:30.126724] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:56.880 [2024-12-07 17:32:30.133923] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:56.880 [2024-12-07 17:32:30.133953] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:56.880 [2024-12-07 17:32:30.134200] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:56.880 BaseBdev4 00:15:56.880 17:32:30 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.880 17:32:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:56.880 17:32:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:56.880 17:32:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:56.880 17:32:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:56.880 17:32:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:56.880 17:32:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:56.880 17:32:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:56.880 17:32:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.880 17:32:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.880 17:32:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.880 17:32:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:56.880 17:32:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.880 17:32:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.880 [ 00:15:56.880 { 00:15:56.880 "name": "BaseBdev4", 00:15:56.880 "aliases": [ 00:15:56.880 "cfc79474-8043-4e47-88e2-e26881037b6f" 00:15:56.880 ], 00:15:56.880 "product_name": "Malloc disk", 00:15:56.880 "block_size": 512, 00:15:56.880 "num_blocks": 65536, 00:15:56.880 "uuid": "cfc79474-8043-4e47-88e2-e26881037b6f", 00:15:56.880 "assigned_rate_limits": { 00:15:56.880 "rw_ios_per_sec": 0, 00:15:56.880 
"rw_mbytes_per_sec": 0, 00:15:56.880 "r_mbytes_per_sec": 0, 00:15:56.880 "w_mbytes_per_sec": 0 00:15:56.880 }, 00:15:56.880 "claimed": true, 00:15:56.880 "claim_type": "exclusive_write", 00:15:56.880 "zoned": false, 00:15:56.880 "supported_io_types": { 00:15:56.880 "read": true, 00:15:56.880 "write": true, 00:15:56.880 "unmap": true, 00:15:56.880 "flush": true, 00:15:56.880 "reset": true, 00:15:56.880 "nvme_admin": false, 00:15:56.880 "nvme_io": false, 00:15:56.881 "nvme_io_md": false, 00:15:56.881 "write_zeroes": true, 00:15:56.881 "zcopy": true, 00:15:56.881 "get_zone_info": false, 00:15:56.881 "zone_management": false, 00:15:56.881 "zone_append": false, 00:15:56.881 "compare": false, 00:15:56.881 "compare_and_write": false, 00:15:56.881 "abort": true, 00:15:56.881 "seek_hole": false, 00:15:56.881 "seek_data": false, 00:15:56.881 "copy": true, 00:15:56.881 "nvme_iov_md": false 00:15:56.881 }, 00:15:56.881 "memory_domains": [ 00:15:56.881 { 00:15:56.881 "dma_device_id": "system", 00:15:56.881 "dma_device_type": 1 00:15:56.881 }, 00:15:56.881 { 00:15:56.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.881 "dma_device_type": 2 00:15:56.881 } 00:15:56.881 ], 00:15:56.881 "driver_specific": {} 00:15:56.881 } 00:15:56.881 ] 00:15:56.881 17:32:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.881 17:32:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:56.881 17:32:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:56.881 17:32:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:56.881 17:32:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:56.881 17:32:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:56.881 17:32:30 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:56.881 17:32:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:56.881 17:32:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:56.881 17:32:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:56.881 17:32:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.881 17:32:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.881 17:32:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.881 17:32:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.881 17:32:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.881 17:32:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.881 17:32:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.881 17:32:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:56.881 17:32:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.881 17:32:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.881 "name": "Existed_Raid", 00:15:56.881 "uuid": "4dcc2642-a629-4ffc-8b2c-60eace4ab011", 00:15:56.881 "strip_size_kb": 64, 00:15:56.881 "state": "online", 00:15:56.881 "raid_level": "raid5f", 00:15:56.881 "superblock": false, 00:15:56.881 "num_base_bdevs": 4, 00:15:56.881 "num_base_bdevs_discovered": 4, 00:15:56.881 "num_base_bdevs_operational": 4, 00:15:56.881 "base_bdevs_list": [ 00:15:56.881 { 00:15:56.881 "name": 
"BaseBdev1", 00:15:56.881 "uuid": "1944f5a4-09d8-46a9-8a46-87bc687fd12f", 00:15:56.881 "is_configured": true, 00:15:56.881 "data_offset": 0, 00:15:56.881 "data_size": 65536 00:15:56.881 }, 00:15:56.881 { 00:15:56.881 "name": "BaseBdev2", 00:15:56.881 "uuid": "3ac4a3e6-1082-4fcf-8ae1-d60c57d141a3", 00:15:56.881 "is_configured": true, 00:15:56.881 "data_offset": 0, 00:15:56.881 "data_size": 65536 00:15:56.881 }, 00:15:56.881 { 00:15:56.881 "name": "BaseBdev3", 00:15:56.881 "uuid": "2750eef9-2d59-4e0b-8417-89b21a369117", 00:15:56.881 "is_configured": true, 00:15:56.881 "data_offset": 0, 00:15:56.881 "data_size": 65536 00:15:56.881 }, 00:15:56.881 { 00:15:56.881 "name": "BaseBdev4", 00:15:56.881 "uuid": "cfc79474-8043-4e47-88e2-e26881037b6f", 00:15:56.881 "is_configured": true, 00:15:56.881 "data_offset": 0, 00:15:56.881 "data_size": 65536 00:15:56.881 } 00:15:56.881 ] 00:15:56.881 }' 00:15:56.881 17:32:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.881 17:32:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.451 17:32:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:57.451 17:32:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:57.451 17:32:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:57.451 17:32:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:57.451 17:32:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:57.451 17:32:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:57.451 17:32:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:57.451 17:32:30 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:57.451 17:32:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.451 17:32:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.451 [2024-12-07 17:32:30.613479] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:57.451 17:32:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.451 17:32:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:57.451 "name": "Existed_Raid", 00:15:57.451 "aliases": [ 00:15:57.451 "4dcc2642-a629-4ffc-8b2c-60eace4ab011" 00:15:57.451 ], 00:15:57.451 "product_name": "Raid Volume", 00:15:57.451 "block_size": 512, 00:15:57.451 "num_blocks": 196608, 00:15:57.451 "uuid": "4dcc2642-a629-4ffc-8b2c-60eace4ab011", 00:15:57.451 "assigned_rate_limits": { 00:15:57.451 "rw_ios_per_sec": 0, 00:15:57.451 "rw_mbytes_per_sec": 0, 00:15:57.451 "r_mbytes_per_sec": 0, 00:15:57.451 "w_mbytes_per_sec": 0 00:15:57.451 }, 00:15:57.451 "claimed": false, 00:15:57.451 "zoned": false, 00:15:57.451 "supported_io_types": { 00:15:57.451 "read": true, 00:15:57.451 "write": true, 00:15:57.451 "unmap": false, 00:15:57.451 "flush": false, 00:15:57.451 "reset": true, 00:15:57.451 "nvme_admin": false, 00:15:57.451 "nvme_io": false, 00:15:57.451 "nvme_io_md": false, 00:15:57.451 "write_zeroes": true, 00:15:57.451 "zcopy": false, 00:15:57.451 "get_zone_info": false, 00:15:57.451 "zone_management": false, 00:15:57.451 "zone_append": false, 00:15:57.451 "compare": false, 00:15:57.451 "compare_and_write": false, 00:15:57.451 "abort": false, 00:15:57.451 "seek_hole": false, 00:15:57.451 "seek_data": false, 00:15:57.451 "copy": false, 00:15:57.451 "nvme_iov_md": false 00:15:57.451 }, 00:15:57.451 "driver_specific": { 00:15:57.451 "raid": { 00:15:57.451 "uuid": "4dcc2642-a629-4ffc-8b2c-60eace4ab011", 00:15:57.451 "strip_size_kb": 64, 
00:15:57.451 "state": "online", 00:15:57.451 "raid_level": "raid5f", 00:15:57.451 "superblock": false, 00:15:57.451 "num_base_bdevs": 4, 00:15:57.451 "num_base_bdevs_discovered": 4, 00:15:57.451 "num_base_bdevs_operational": 4, 00:15:57.451 "base_bdevs_list": [ 00:15:57.451 { 00:15:57.451 "name": "BaseBdev1", 00:15:57.451 "uuid": "1944f5a4-09d8-46a9-8a46-87bc687fd12f", 00:15:57.451 "is_configured": true, 00:15:57.451 "data_offset": 0, 00:15:57.451 "data_size": 65536 00:15:57.451 }, 00:15:57.451 { 00:15:57.451 "name": "BaseBdev2", 00:15:57.451 "uuid": "3ac4a3e6-1082-4fcf-8ae1-d60c57d141a3", 00:15:57.451 "is_configured": true, 00:15:57.451 "data_offset": 0, 00:15:57.451 "data_size": 65536 00:15:57.451 }, 00:15:57.451 { 00:15:57.451 "name": "BaseBdev3", 00:15:57.451 "uuid": "2750eef9-2d59-4e0b-8417-89b21a369117", 00:15:57.451 "is_configured": true, 00:15:57.451 "data_offset": 0, 00:15:57.451 "data_size": 65536 00:15:57.451 }, 00:15:57.451 { 00:15:57.451 "name": "BaseBdev4", 00:15:57.451 "uuid": "cfc79474-8043-4e47-88e2-e26881037b6f", 00:15:57.451 "is_configured": true, 00:15:57.451 "data_offset": 0, 00:15:57.451 "data_size": 65536 00:15:57.451 } 00:15:57.451 ] 00:15:57.451 } 00:15:57.451 } 00:15:57.451 }' 00:15:57.451 17:32:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:57.451 17:32:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:57.451 BaseBdev2 00:15:57.451 BaseBdev3 00:15:57.451 BaseBdev4' 00:15:57.451 17:32:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:57.451 17:32:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:57.451 17:32:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:57.451 17:32:30 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:57.451 17:32:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:57.451 17:32:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.451 17:32:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.451 17:32:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.451 17:32:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:57.451 17:32:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:57.451 17:32:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:57.451 17:32:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:57.451 17:32:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:57.451 17:32:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.451 17:32:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.712 17:32:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.712 17:32:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:57.712 17:32:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:57.712 17:32:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:57.712 17:32:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:15:57.712 17:32:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:57.712 17:32:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.712 17:32:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.712 17:32:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.712 17:32:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:57.712 17:32:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:57.712 17:32:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:57.712 17:32:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:57.712 17:32:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:57.712 17:32:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.712 17:32:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.712 17:32:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.712 17:32:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:57.712 17:32:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:57.712 17:32:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:57.712 17:32:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.712 17:32:30 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:15:57.712 [2024-12-07 17:32:30.964763] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:57.712 17:32:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.712 17:32:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:57.712 17:32:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:57.712 17:32:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:57.712 17:32:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:57.712 17:32:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:57.712 17:32:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:57.712 17:32:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:57.712 17:32:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:57.712 17:32:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:57.712 17:32:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:57.712 17:32:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:57.712 17:32:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.712 17:32:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.712 17:32:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.712 17:32:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.712 17:32:31 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.712 17:32:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:57.712 17:32:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.712 17:32:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.712 17:32:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.972 17:32:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.972 "name": "Existed_Raid", 00:15:57.972 "uuid": "4dcc2642-a629-4ffc-8b2c-60eace4ab011", 00:15:57.972 "strip_size_kb": 64, 00:15:57.972 "state": "online", 00:15:57.972 "raid_level": "raid5f", 00:15:57.972 "superblock": false, 00:15:57.972 "num_base_bdevs": 4, 00:15:57.972 "num_base_bdevs_discovered": 3, 00:15:57.972 "num_base_bdevs_operational": 3, 00:15:57.972 "base_bdevs_list": [ 00:15:57.972 { 00:15:57.972 "name": null, 00:15:57.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.972 "is_configured": false, 00:15:57.972 "data_offset": 0, 00:15:57.972 "data_size": 65536 00:15:57.972 }, 00:15:57.972 { 00:15:57.972 "name": "BaseBdev2", 00:15:57.972 "uuid": "3ac4a3e6-1082-4fcf-8ae1-d60c57d141a3", 00:15:57.972 "is_configured": true, 00:15:57.972 "data_offset": 0, 00:15:57.972 "data_size": 65536 00:15:57.972 }, 00:15:57.972 { 00:15:57.972 "name": "BaseBdev3", 00:15:57.972 "uuid": "2750eef9-2d59-4e0b-8417-89b21a369117", 00:15:57.972 "is_configured": true, 00:15:57.972 "data_offset": 0, 00:15:57.972 "data_size": 65536 00:15:57.972 }, 00:15:57.972 { 00:15:57.972 "name": "BaseBdev4", 00:15:57.972 "uuid": "cfc79474-8043-4e47-88e2-e26881037b6f", 00:15:57.972 "is_configured": true, 00:15:57.972 "data_offset": 0, 00:15:57.972 "data_size": 65536 00:15:57.972 } 00:15:57.972 ] 00:15:57.972 }' 00:15:57.973 
17:32:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.973 17:32:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.231 17:32:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:58.231 17:32:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:58.231 17:32:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.231 17:32:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:58.231 17:32:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.231 17:32:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.231 17:32:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.231 17:32:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:58.231 17:32:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:58.232 17:32:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:58.232 17:32:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.232 17:32:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.232 [2024-12-07 17:32:31.562089] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:58.232 [2024-12-07 17:32:31.562242] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:58.491 [2024-12-07 17:32:31.656013] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:58.491 17:32:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:15:58.491 17:32:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:58.491 17:32:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:58.491 17:32:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:58.491 17:32:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.491 17:32:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.491 17:32:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.491 17:32:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.491 17:32:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:58.491 17:32:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:58.491 17:32:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:58.491 17:32:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.491 17:32:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.491 [2024-12-07 17:32:31.711943] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:58.491 17:32:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.491 17:32:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:58.491 17:32:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:58.491 17:32:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.491 17:32:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # 
jq -r '.[0]["name"]' 00:15:58.491 17:32:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.491 17:32:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.491 17:32:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.491 17:32:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:58.491 17:32:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:58.491 17:32:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:58.491 17:32:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.491 17:32:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.491 [2024-12-07 17:32:31.863669] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:58.492 [2024-12-07 17:32:31.863763] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:58.751 17:32:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.752 17:32:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:58.752 17:32:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:58.752 17:32:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.752 17:32:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:58.752 17:32:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.752 17:32:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.752 17:32:31 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.752 17:32:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:58.752 17:32:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:58.752 17:32:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:58.752 17:32:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:58.752 17:32:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:58.752 17:32:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:58.752 17:32:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.752 17:32:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.752 BaseBdev2 00:15:58.752 17:32:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.752 17:32:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:58.752 17:32:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:58.752 17:32:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:58.752 17:32:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:58.752 17:32:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:58.752 17:32:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:58.752 17:32:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:58.752 17:32:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:58.752 17:32:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.752 17:32:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.752 17:32:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:58.752 17:32:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.752 17:32:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.752 [ 00:15:58.752 { 00:15:58.752 "name": "BaseBdev2", 00:15:58.752 "aliases": [ 00:15:58.752 "e8d759e1-de50-419f-ae9d-8ea05d72c679" 00:15:58.752 ], 00:15:58.752 "product_name": "Malloc disk", 00:15:58.752 "block_size": 512, 00:15:58.752 "num_blocks": 65536, 00:15:58.752 "uuid": "e8d759e1-de50-419f-ae9d-8ea05d72c679", 00:15:58.752 "assigned_rate_limits": { 00:15:58.752 "rw_ios_per_sec": 0, 00:15:58.752 "rw_mbytes_per_sec": 0, 00:15:58.752 "r_mbytes_per_sec": 0, 00:15:58.752 "w_mbytes_per_sec": 0 00:15:58.752 }, 00:15:58.752 "claimed": false, 00:15:58.752 "zoned": false, 00:15:58.752 "supported_io_types": { 00:15:58.752 "read": true, 00:15:58.752 "write": true, 00:15:58.752 "unmap": true, 00:15:58.752 "flush": true, 00:15:58.752 "reset": true, 00:15:58.752 "nvme_admin": false, 00:15:58.752 "nvme_io": false, 00:15:58.752 "nvme_io_md": false, 00:15:58.752 "write_zeroes": true, 00:15:58.752 "zcopy": true, 00:15:58.752 "get_zone_info": false, 00:15:58.752 "zone_management": false, 00:15:58.752 "zone_append": false, 00:15:58.752 "compare": false, 00:15:58.752 "compare_and_write": false, 00:15:58.752 "abort": true, 00:15:58.752 "seek_hole": false, 00:15:58.752 "seek_data": false, 00:15:58.752 "copy": true, 00:15:58.752 "nvme_iov_md": false 00:15:58.752 }, 00:15:58.752 "memory_domains": [ 00:15:58.752 { 00:15:58.752 "dma_device_id": "system", 00:15:58.752 "dma_device_type": 1 00:15:58.752 }, 
00:15:58.752 { 00:15:58.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:58.752 "dma_device_type": 2 00:15:58.752 } 00:15:58.752 ], 00:15:58.752 "driver_specific": {} 00:15:58.752 } 00:15:58.752 ] 00:15:58.752 17:32:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.752 17:32:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:58.752 17:32:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:58.752 17:32:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:58.752 17:32:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:58.752 17:32:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.752 17:32:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.752 BaseBdev3 00:15:59.013 17:32:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.013 17:32:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:59.013 17:32:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:59.013 17:32:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:59.013 17:32:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:59.013 17:32:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:59.013 17:32:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:59.013 17:32:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:59.013 17:32:32 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.013 17:32:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.013 17:32:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.013 17:32:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:59.013 17:32:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.013 17:32:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.013 [ 00:15:59.013 { 00:15:59.013 "name": "BaseBdev3", 00:15:59.013 "aliases": [ 00:15:59.013 "0367fb25-53fa-4fb6-ae86-d2000528eed1" 00:15:59.013 ], 00:15:59.013 "product_name": "Malloc disk", 00:15:59.013 "block_size": 512, 00:15:59.013 "num_blocks": 65536, 00:15:59.013 "uuid": "0367fb25-53fa-4fb6-ae86-d2000528eed1", 00:15:59.013 "assigned_rate_limits": { 00:15:59.013 "rw_ios_per_sec": 0, 00:15:59.013 "rw_mbytes_per_sec": 0, 00:15:59.013 "r_mbytes_per_sec": 0, 00:15:59.013 "w_mbytes_per_sec": 0 00:15:59.013 }, 00:15:59.013 "claimed": false, 00:15:59.013 "zoned": false, 00:15:59.013 "supported_io_types": { 00:15:59.013 "read": true, 00:15:59.013 "write": true, 00:15:59.013 "unmap": true, 00:15:59.013 "flush": true, 00:15:59.013 "reset": true, 00:15:59.013 "nvme_admin": false, 00:15:59.013 "nvme_io": false, 00:15:59.013 "nvme_io_md": false, 00:15:59.013 "write_zeroes": true, 00:15:59.013 "zcopy": true, 00:15:59.013 "get_zone_info": false, 00:15:59.013 "zone_management": false, 00:15:59.013 "zone_append": false, 00:15:59.013 "compare": false, 00:15:59.013 "compare_and_write": false, 00:15:59.013 "abort": true, 00:15:59.013 "seek_hole": false, 00:15:59.013 "seek_data": false, 00:15:59.013 "copy": true, 00:15:59.013 "nvme_iov_md": false 00:15:59.013 }, 00:15:59.013 "memory_domains": [ 00:15:59.013 { 00:15:59.013 "dma_device_id": "system", 00:15:59.013 
"dma_device_type": 1 00:15:59.013 }, 00:15:59.013 { 00:15:59.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:59.013 "dma_device_type": 2 00:15:59.013 } 00:15:59.013 ], 00:15:59.013 "driver_specific": {} 00:15:59.013 } 00:15:59.013 ] 00:15:59.013 17:32:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.013 17:32:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:59.013 17:32:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:59.013 17:32:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:59.013 17:32:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:59.013 17:32:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.013 17:32:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.013 BaseBdev4 00:15:59.013 17:32:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.013 17:32:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:59.013 17:32:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:59.013 17:32:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:59.013 17:32:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:59.013 17:32:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:59.013 17:32:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:59.013 17:32:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:59.013 17:32:32 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.013 17:32:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.013 17:32:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.013 17:32:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:59.013 17:32:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.013 17:32:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.013 [ 00:15:59.013 { 00:15:59.013 "name": "BaseBdev4", 00:15:59.013 "aliases": [ 00:15:59.013 "612149ce-e1da-4a2e-b5ab-53b2c3289086" 00:15:59.013 ], 00:15:59.013 "product_name": "Malloc disk", 00:15:59.013 "block_size": 512, 00:15:59.013 "num_blocks": 65536, 00:15:59.013 "uuid": "612149ce-e1da-4a2e-b5ab-53b2c3289086", 00:15:59.013 "assigned_rate_limits": { 00:15:59.013 "rw_ios_per_sec": 0, 00:15:59.013 "rw_mbytes_per_sec": 0, 00:15:59.013 "r_mbytes_per_sec": 0, 00:15:59.013 "w_mbytes_per_sec": 0 00:15:59.013 }, 00:15:59.013 "claimed": false, 00:15:59.013 "zoned": false, 00:15:59.013 "supported_io_types": { 00:15:59.013 "read": true, 00:15:59.013 "write": true, 00:15:59.013 "unmap": true, 00:15:59.013 "flush": true, 00:15:59.013 "reset": true, 00:15:59.013 "nvme_admin": false, 00:15:59.013 "nvme_io": false, 00:15:59.013 "nvme_io_md": false, 00:15:59.013 "write_zeroes": true, 00:15:59.013 "zcopy": true, 00:15:59.013 "get_zone_info": false, 00:15:59.013 "zone_management": false, 00:15:59.013 "zone_append": false, 00:15:59.013 "compare": false, 00:15:59.013 "compare_and_write": false, 00:15:59.013 "abort": true, 00:15:59.013 "seek_hole": false, 00:15:59.013 "seek_data": false, 00:15:59.013 "copy": true, 00:15:59.013 "nvme_iov_md": false 00:15:59.013 }, 00:15:59.013 "memory_domains": [ 00:15:59.013 { 00:15:59.013 
"dma_device_id": "system", 00:15:59.013 "dma_device_type": 1 00:15:59.013 }, 00:15:59.013 { 00:15:59.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:59.013 "dma_device_type": 2 00:15:59.013 } 00:15:59.013 ], 00:15:59.013 "driver_specific": {} 00:15:59.013 } 00:15:59.013 ] 00:15:59.013 17:32:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.013 17:32:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:59.013 17:32:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:59.013 17:32:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:59.013 17:32:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:59.013 17:32:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.013 17:32:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.013 [2024-12-07 17:32:32.258882] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:59.013 [2024-12-07 17:32:32.259006] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:59.013 [2024-12-07 17:32:32.259055] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:59.013 [2024-12-07 17:32:32.261018] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:59.013 [2024-12-07 17:32:32.261113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:59.013 17:32:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.013 17:32:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:15:59.013 17:32:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:59.013 17:32:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:59.013 17:32:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:59.013 17:32:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:59.013 17:32:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:59.013 17:32:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.014 17:32:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.014 17:32:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.014 17:32:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.014 17:32:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.014 17:32:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:59.014 17:32:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.014 17:32:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.014 17:32:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.014 17:32:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.014 "name": "Existed_Raid", 00:15:59.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.014 "strip_size_kb": 64, 00:15:59.014 "state": "configuring", 00:15:59.014 "raid_level": "raid5f", 00:15:59.014 "superblock": false, 00:15:59.014 
"num_base_bdevs": 4, 00:15:59.014 "num_base_bdevs_discovered": 3, 00:15:59.014 "num_base_bdevs_operational": 4, 00:15:59.014 "base_bdevs_list": [ 00:15:59.014 { 00:15:59.014 "name": "BaseBdev1", 00:15:59.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.014 "is_configured": false, 00:15:59.014 "data_offset": 0, 00:15:59.014 "data_size": 0 00:15:59.014 }, 00:15:59.014 { 00:15:59.014 "name": "BaseBdev2", 00:15:59.014 "uuid": "e8d759e1-de50-419f-ae9d-8ea05d72c679", 00:15:59.014 "is_configured": true, 00:15:59.014 "data_offset": 0, 00:15:59.014 "data_size": 65536 00:15:59.014 }, 00:15:59.014 { 00:15:59.014 "name": "BaseBdev3", 00:15:59.014 "uuid": "0367fb25-53fa-4fb6-ae86-d2000528eed1", 00:15:59.014 "is_configured": true, 00:15:59.014 "data_offset": 0, 00:15:59.014 "data_size": 65536 00:15:59.014 }, 00:15:59.014 { 00:15:59.014 "name": "BaseBdev4", 00:15:59.014 "uuid": "612149ce-e1da-4a2e-b5ab-53b2c3289086", 00:15:59.014 "is_configured": true, 00:15:59.014 "data_offset": 0, 00:15:59.014 "data_size": 65536 00:15:59.014 } 00:15:59.014 ] 00:15:59.014 }' 00:15:59.014 17:32:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.014 17:32:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.584 17:32:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:59.584 17:32:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.584 17:32:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.584 [2024-12-07 17:32:32.710081] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:59.584 17:32:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.584 17:32:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:15:59.584 17:32:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:59.584 17:32:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:59.584 17:32:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:59.584 17:32:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:59.584 17:32:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:59.584 17:32:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.584 17:32:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.584 17:32:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.584 17:32:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.584 17:32:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:59.584 17:32:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.584 17:32:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.584 17:32:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.584 17:32:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.584 17:32:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.584 "name": "Existed_Raid", 00:15:59.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.584 "strip_size_kb": 64, 00:15:59.584 "state": "configuring", 00:15:59.584 "raid_level": "raid5f", 00:15:59.584 "superblock": false, 00:15:59.584 "num_base_bdevs": 4, 
00:15:59.584 "num_base_bdevs_discovered": 2, 00:15:59.584 "num_base_bdevs_operational": 4, 00:15:59.584 "base_bdevs_list": [ 00:15:59.584 { 00:15:59.584 "name": "BaseBdev1", 00:15:59.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.584 "is_configured": false, 00:15:59.584 "data_offset": 0, 00:15:59.584 "data_size": 0 00:15:59.584 }, 00:15:59.584 { 00:15:59.584 "name": null, 00:15:59.584 "uuid": "e8d759e1-de50-419f-ae9d-8ea05d72c679", 00:15:59.584 "is_configured": false, 00:15:59.584 "data_offset": 0, 00:15:59.584 "data_size": 65536 00:15:59.584 }, 00:15:59.584 { 00:15:59.584 "name": "BaseBdev3", 00:15:59.584 "uuid": "0367fb25-53fa-4fb6-ae86-d2000528eed1", 00:15:59.584 "is_configured": true, 00:15:59.584 "data_offset": 0, 00:15:59.584 "data_size": 65536 00:15:59.584 }, 00:15:59.584 { 00:15:59.584 "name": "BaseBdev4", 00:15:59.584 "uuid": "612149ce-e1da-4a2e-b5ab-53b2c3289086", 00:15:59.584 "is_configured": true, 00:15:59.584 "data_offset": 0, 00:15:59.584 "data_size": 65536 00:15:59.584 } 00:15:59.584 ] 00:15:59.584 }' 00:15:59.584 17:32:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.584 17:32:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.845 17:32:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:59.845 17:32:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.845 17:32:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.845 17:32:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.845 17:32:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.845 17:32:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:59.845 17:32:33 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:59.845 17:32:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.845 17:32:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.845 [2024-12-07 17:32:33.213461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:59.845 BaseBdev1 00:15:59.845 17:32:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.845 17:32:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:59.845 17:32:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:59.845 17:32:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:59.845 17:32:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:59.845 17:32:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:59.845 17:32:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:59.845 17:32:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:59.845 17:32:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.845 17:32:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.105 17:32:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.105 17:32:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:00.105 17:32:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.105 17:32:33 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.105 [ 00:16:00.105 { 00:16:00.105 "name": "BaseBdev1", 00:16:00.105 "aliases": [ 00:16:00.105 "82a5627a-1109-43c7-a680-595edbf2f2f6" 00:16:00.105 ], 00:16:00.105 "product_name": "Malloc disk", 00:16:00.105 "block_size": 512, 00:16:00.105 "num_blocks": 65536, 00:16:00.105 "uuid": "82a5627a-1109-43c7-a680-595edbf2f2f6", 00:16:00.105 "assigned_rate_limits": { 00:16:00.105 "rw_ios_per_sec": 0, 00:16:00.105 "rw_mbytes_per_sec": 0, 00:16:00.105 "r_mbytes_per_sec": 0, 00:16:00.105 "w_mbytes_per_sec": 0 00:16:00.105 }, 00:16:00.105 "claimed": true, 00:16:00.105 "claim_type": "exclusive_write", 00:16:00.105 "zoned": false, 00:16:00.105 "supported_io_types": { 00:16:00.105 "read": true, 00:16:00.105 "write": true, 00:16:00.105 "unmap": true, 00:16:00.105 "flush": true, 00:16:00.105 "reset": true, 00:16:00.105 "nvme_admin": false, 00:16:00.105 "nvme_io": false, 00:16:00.105 "nvme_io_md": false, 00:16:00.105 "write_zeroes": true, 00:16:00.105 "zcopy": true, 00:16:00.105 "get_zone_info": false, 00:16:00.105 "zone_management": false, 00:16:00.105 "zone_append": false, 00:16:00.105 "compare": false, 00:16:00.105 "compare_and_write": false, 00:16:00.105 "abort": true, 00:16:00.105 "seek_hole": false, 00:16:00.105 "seek_data": false, 00:16:00.105 "copy": true, 00:16:00.105 "nvme_iov_md": false 00:16:00.105 }, 00:16:00.105 "memory_domains": [ 00:16:00.105 { 00:16:00.105 "dma_device_id": "system", 00:16:00.105 "dma_device_type": 1 00:16:00.105 }, 00:16:00.105 { 00:16:00.105 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:00.105 "dma_device_type": 2 00:16:00.105 } 00:16:00.105 ], 00:16:00.105 "driver_specific": {} 00:16:00.105 } 00:16:00.105 ] 00:16:00.105 17:32:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.106 17:32:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:00.106 17:32:33 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:00.106 17:32:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:00.106 17:32:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:00.106 17:32:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:00.106 17:32:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:00.106 17:32:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:00.106 17:32:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.106 17:32:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.106 17:32:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.106 17:32:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.106 17:32:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:00.106 17:32:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.106 17:32:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.106 17:32:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.106 17:32:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.106 17:32:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.106 "name": "Existed_Raid", 00:16:00.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.106 "strip_size_kb": 64, 00:16:00.106 "state": 
"configuring", 00:16:00.106 "raid_level": "raid5f", 00:16:00.106 "superblock": false, 00:16:00.106 "num_base_bdevs": 4, 00:16:00.106 "num_base_bdevs_discovered": 3, 00:16:00.106 "num_base_bdevs_operational": 4, 00:16:00.106 "base_bdevs_list": [ 00:16:00.106 { 00:16:00.106 "name": "BaseBdev1", 00:16:00.106 "uuid": "82a5627a-1109-43c7-a680-595edbf2f2f6", 00:16:00.106 "is_configured": true, 00:16:00.106 "data_offset": 0, 00:16:00.106 "data_size": 65536 00:16:00.106 }, 00:16:00.106 { 00:16:00.106 "name": null, 00:16:00.106 "uuid": "e8d759e1-de50-419f-ae9d-8ea05d72c679", 00:16:00.106 "is_configured": false, 00:16:00.106 "data_offset": 0, 00:16:00.106 "data_size": 65536 00:16:00.106 }, 00:16:00.106 { 00:16:00.106 "name": "BaseBdev3", 00:16:00.106 "uuid": "0367fb25-53fa-4fb6-ae86-d2000528eed1", 00:16:00.106 "is_configured": true, 00:16:00.106 "data_offset": 0, 00:16:00.106 "data_size": 65536 00:16:00.106 }, 00:16:00.106 { 00:16:00.106 "name": "BaseBdev4", 00:16:00.106 "uuid": "612149ce-e1da-4a2e-b5ab-53b2c3289086", 00:16:00.106 "is_configured": true, 00:16:00.106 "data_offset": 0, 00:16:00.106 "data_size": 65536 00:16:00.106 } 00:16:00.106 ] 00:16:00.106 }' 00:16:00.106 17:32:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.106 17:32:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.367 17:32:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.367 17:32:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:00.367 17:32:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.367 17:32:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.367 17:32:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.367 17:32:33 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:00.367 17:32:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:00.367 17:32:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.367 17:32:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.367 [2024-12-07 17:32:33.728673] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:00.367 17:32:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.367 17:32:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:00.367 17:32:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:00.367 17:32:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:00.367 17:32:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:00.367 17:32:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:00.367 17:32:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:00.367 17:32:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.367 17:32:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.367 17:32:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.367 17:32:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.367 17:32:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.367 17:32:33 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.367 17:32:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.367 17:32:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:00.627 17:32:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.627 17:32:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.627 "name": "Existed_Raid", 00:16:00.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.628 "strip_size_kb": 64, 00:16:00.628 "state": "configuring", 00:16:00.628 "raid_level": "raid5f", 00:16:00.628 "superblock": false, 00:16:00.628 "num_base_bdevs": 4, 00:16:00.628 "num_base_bdevs_discovered": 2, 00:16:00.628 "num_base_bdevs_operational": 4, 00:16:00.628 "base_bdevs_list": [ 00:16:00.628 { 00:16:00.628 "name": "BaseBdev1", 00:16:00.628 "uuid": "82a5627a-1109-43c7-a680-595edbf2f2f6", 00:16:00.628 "is_configured": true, 00:16:00.628 "data_offset": 0, 00:16:00.628 "data_size": 65536 00:16:00.628 }, 00:16:00.628 { 00:16:00.628 "name": null, 00:16:00.628 "uuid": "e8d759e1-de50-419f-ae9d-8ea05d72c679", 00:16:00.628 "is_configured": false, 00:16:00.628 "data_offset": 0, 00:16:00.628 "data_size": 65536 00:16:00.628 }, 00:16:00.628 { 00:16:00.628 "name": null, 00:16:00.628 "uuid": "0367fb25-53fa-4fb6-ae86-d2000528eed1", 00:16:00.628 "is_configured": false, 00:16:00.628 "data_offset": 0, 00:16:00.628 "data_size": 65536 00:16:00.628 }, 00:16:00.628 { 00:16:00.628 "name": "BaseBdev4", 00:16:00.628 "uuid": "612149ce-e1da-4a2e-b5ab-53b2c3289086", 00:16:00.628 "is_configured": true, 00:16:00.628 "data_offset": 0, 00:16:00.628 "data_size": 65536 00:16:00.628 } 00:16:00.628 ] 00:16:00.628 }' 00:16:00.628 17:32:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.628 17:32:33 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.889 17:32:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.889 17:32:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:00.889 17:32:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.889 17:32:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.889 17:32:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.889 17:32:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:00.889 17:32:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:00.889 17:32:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.889 17:32:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.889 [2024-12-07 17:32:34.215811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:00.889 17:32:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.889 17:32:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:00.889 17:32:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:00.889 17:32:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:00.889 17:32:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:00.889 17:32:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:00.889 
17:32:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:00.889 17:32:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.889 17:32:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.889 17:32:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.889 17:32:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.889 17:32:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:00.889 17:32:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.889 17:32:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.889 17:32:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.889 17:32:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.889 17:32:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.889 "name": "Existed_Raid", 00:16:00.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.889 "strip_size_kb": 64, 00:16:00.889 "state": "configuring", 00:16:00.889 "raid_level": "raid5f", 00:16:00.889 "superblock": false, 00:16:00.889 "num_base_bdevs": 4, 00:16:00.889 "num_base_bdevs_discovered": 3, 00:16:00.889 "num_base_bdevs_operational": 4, 00:16:00.889 "base_bdevs_list": [ 00:16:00.889 { 00:16:00.889 "name": "BaseBdev1", 00:16:00.889 "uuid": "82a5627a-1109-43c7-a680-595edbf2f2f6", 00:16:00.889 "is_configured": true, 00:16:00.889 "data_offset": 0, 00:16:00.889 "data_size": 65536 00:16:00.889 }, 00:16:00.889 { 00:16:00.889 "name": null, 00:16:00.889 "uuid": "e8d759e1-de50-419f-ae9d-8ea05d72c679", 00:16:00.889 "is_configured": 
false, 00:16:00.889 "data_offset": 0, 00:16:00.889 "data_size": 65536 00:16:00.889 }, 00:16:00.889 { 00:16:00.889 "name": "BaseBdev3", 00:16:00.889 "uuid": "0367fb25-53fa-4fb6-ae86-d2000528eed1", 00:16:00.889 "is_configured": true, 00:16:00.889 "data_offset": 0, 00:16:00.889 "data_size": 65536 00:16:00.889 }, 00:16:00.889 { 00:16:00.889 "name": "BaseBdev4", 00:16:00.889 "uuid": "612149ce-e1da-4a2e-b5ab-53b2c3289086", 00:16:00.889 "is_configured": true, 00:16:00.889 "data_offset": 0, 00:16:00.889 "data_size": 65536 00:16:00.889 } 00:16:00.889 ] 00:16:00.889 }' 00:16:00.889 17:32:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.889 17:32:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.460 17:32:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.460 17:32:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:01.460 17:32:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.460 17:32:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.460 17:32:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.460 17:32:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:01.460 17:32:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:01.460 17:32:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.460 17:32:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.460 [2024-12-07 17:32:34.659137] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:01.460 17:32:34 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.460 17:32:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:01.460 17:32:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:01.460 17:32:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:01.460 17:32:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:01.460 17:32:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:01.460 17:32:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:01.460 17:32:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.460 17:32:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.460 17:32:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.460 17:32:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.460 17:32:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.460 17:32:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:01.460 17:32:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.460 17:32:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.460 17:32:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.460 17:32:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.460 "name": "Existed_Raid", 00:16:01.460 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:01.460 "strip_size_kb": 64, 00:16:01.460 "state": "configuring", 00:16:01.460 "raid_level": "raid5f", 00:16:01.460 "superblock": false, 00:16:01.460 "num_base_bdevs": 4, 00:16:01.460 "num_base_bdevs_discovered": 2, 00:16:01.460 "num_base_bdevs_operational": 4, 00:16:01.460 "base_bdevs_list": [ 00:16:01.460 { 00:16:01.460 "name": null, 00:16:01.460 "uuid": "82a5627a-1109-43c7-a680-595edbf2f2f6", 00:16:01.460 "is_configured": false, 00:16:01.460 "data_offset": 0, 00:16:01.460 "data_size": 65536 00:16:01.460 }, 00:16:01.460 { 00:16:01.460 "name": null, 00:16:01.460 "uuid": "e8d759e1-de50-419f-ae9d-8ea05d72c679", 00:16:01.460 "is_configured": false, 00:16:01.460 "data_offset": 0, 00:16:01.460 "data_size": 65536 00:16:01.460 }, 00:16:01.460 { 00:16:01.460 "name": "BaseBdev3", 00:16:01.460 "uuid": "0367fb25-53fa-4fb6-ae86-d2000528eed1", 00:16:01.460 "is_configured": true, 00:16:01.460 "data_offset": 0, 00:16:01.460 "data_size": 65536 00:16:01.460 }, 00:16:01.460 { 00:16:01.460 "name": "BaseBdev4", 00:16:01.460 "uuid": "612149ce-e1da-4a2e-b5ab-53b2c3289086", 00:16:01.460 "is_configured": true, 00:16:01.460 "data_offset": 0, 00:16:01.460 "data_size": 65536 00:16:01.460 } 00:16:01.460 ] 00:16:01.460 }' 00:16:01.460 17:32:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.460 17:32:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.030 17:32:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.030 17:32:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.030 17:32:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:02.030 17:32:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.030 17:32:35 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.030 17:32:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:02.030 17:32:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:02.030 17:32:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.030 17:32:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.030 [2024-12-07 17:32:35.203310] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:02.030 17:32:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.030 17:32:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:02.030 17:32:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:02.030 17:32:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:02.030 17:32:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:02.030 17:32:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:02.030 17:32:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:02.030 17:32:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.030 17:32:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.030 17:32:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.030 17:32:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.030 17:32:35 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.030 17:32:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:02.030 17:32:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.030 17:32:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.030 17:32:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.030 17:32:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.030 "name": "Existed_Raid", 00:16:02.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.030 "strip_size_kb": 64, 00:16:02.030 "state": "configuring", 00:16:02.030 "raid_level": "raid5f", 00:16:02.030 "superblock": false, 00:16:02.030 "num_base_bdevs": 4, 00:16:02.030 "num_base_bdevs_discovered": 3, 00:16:02.030 "num_base_bdevs_operational": 4, 00:16:02.030 "base_bdevs_list": [ 00:16:02.030 { 00:16:02.030 "name": null, 00:16:02.030 "uuid": "82a5627a-1109-43c7-a680-595edbf2f2f6", 00:16:02.030 "is_configured": false, 00:16:02.030 "data_offset": 0, 00:16:02.030 "data_size": 65536 00:16:02.030 }, 00:16:02.030 { 00:16:02.030 "name": "BaseBdev2", 00:16:02.030 "uuid": "e8d759e1-de50-419f-ae9d-8ea05d72c679", 00:16:02.030 "is_configured": true, 00:16:02.030 "data_offset": 0, 00:16:02.031 "data_size": 65536 00:16:02.031 }, 00:16:02.031 { 00:16:02.031 "name": "BaseBdev3", 00:16:02.031 "uuid": "0367fb25-53fa-4fb6-ae86-d2000528eed1", 00:16:02.031 "is_configured": true, 00:16:02.031 "data_offset": 0, 00:16:02.031 "data_size": 65536 00:16:02.031 }, 00:16:02.031 { 00:16:02.031 "name": "BaseBdev4", 00:16:02.031 "uuid": "612149ce-e1da-4a2e-b5ab-53b2c3289086", 00:16:02.031 "is_configured": true, 00:16:02.031 "data_offset": 0, 00:16:02.031 "data_size": 65536 00:16:02.031 } 00:16:02.031 ] 00:16:02.031 }' 00:16:02.031 17:32:35 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.031 17:32:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.289 17:32:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.289 17:32:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:02.289 17:32:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.289 17:32:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.289 17:32:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.547 17:32:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:02.547 17:32:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.547 17:32:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:02.547 17:32:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.547 17:32:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.548 17:32:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.548 17:32:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 82a5627a-1109-43c7-a680-595edbf2f2f6 00:16:02.548 17:32:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.548 17:32:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.548 [2024-12-07 17:32:35.778172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:02.548 [2024-12-07 
17:32:35.778226] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:02.548 [2024-12-07 17:32:35.778234] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:02.548 [2024-12-07 17:32:35.778486] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:02.548 [2024-12-07 17:32:35.785541] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:02.548 [2024-12-07 17:32:35.785600] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:02.548 [2024-12-07 17:32:35.785884] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:02.548 NewBaseBdev 00:16:02.548 17:32:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.548 17:32:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:02.548 17:32:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:02.548 17:32:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:02.548 17:32:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:02.548 17:32:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:02.548 17:32:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:02.548 17:32:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:02.548 17:32:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.548 17:32:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.548 17:32:35 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.548 17:32:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:02.548 17:32:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.548 17:32:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.548 [ 00:16:02.548 { 00:16:02.548 "name": "NewBaseBdev", 00:16:02.548 "aliases": [ 00:16:02.548 "82a5627a-1109-43c7-a680-595edbf2f2f6" 00:16:02.548 ], 00:16:02.548 "product_name": "Malloc disk", 00:16:02.548 "block_size": 512, 00:16:02.548 "num_blocks": 65536, 00:16:02.548 "uuid": "82a5627a-1109-43c7-a680-595edbf2f2f6", 00:16:02.548 "assigned_rate_limits": { 00:16:02.548 "rw_ios_per_sec": 0, 00:16:02.548 "rw_mbytes_per_sec": 0, 00:16:02.548 "r_mbytes_per_sec": 0, 00:16:02.548 "w_mbytes_per_sec": 0 00:16:02.548 }, 00:16:02.548 "claimed": true, 00:16:02.548 "claim_type": "exclusive_write", 00:16:02.548 "zoned": false, 00:16:02.548 "supported_io_types": { 00:16:02.548 "read": true, 00:16:02.548 "write": true, 00:16:02.548 "unmap": true, 00:16:02.548 "flush": true, 00:16:02.548 "reset": true, 00:16:02.548 "nvme_admin": false, 00:16:02.548 "nvme_io": false, 00:16:02.548 "nvme_io_md": false, 00:16:02.548 "write_zeroes": true, 00:16:02.548 "zcopy": true, 00:16:02.548 "get_zone_info": false, 00:16:02.548 "zone_management": false, 00:16:02.548 "zone_append": false, 00:16:02.548 "compare": false, 00:16:02.548 "compare_and_write": false, 00:16:02.548 "abort": true, 00:16:02.548 "seek_hole": false, 00:16:02.548 "seek_data": false, 00:16:02.548 "copy": true, 00:16:02.548 "nvme_iov_md": false 00:16:02.548 }, 00:16:02.548 "memory_domains": [ 00:16:02.548 { 00:16:02.548 "dma_device_id": "system", 00:16:02.548 "dma_device_type": 1 00:16:02.548 }, 00:16:02.548 { 00:16:02.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:02.548 "dma_device_type": 2 00:16:02.548 } 
00:16:02.548 ], 00:16:02.548 "driver_specific": {} 00:16:02.548 } 00:16:02.548 ] 00:16:02.548 17:32:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.548 17:32:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:02.548 17:32:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:02.548 17:32:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:02.548 17:32:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:02.548 17:32:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:02.548 17:32:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:02.548 17:32:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:02.548 17:32:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.548 17:32:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.548 17:32:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.548 17:32:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.548 17:32:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:02.548 17:32:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.548 17:32:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.548 17:32:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.548 17:32:35 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.548 17:32:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.548 "name": "Existed_Raid", 00:16:02.548 "uuid": "208bff19-8336-464f-9546-55574738735b", 00:16:02.548 "strip_size_kb": 64, 00:16:02.548 "state": "online", 00:16:02.548 "raid_level": "raid5f", 00:16:02.548 "superblock": false, 00:16:02.548 "num_base_bdevs": 4, 00:16:02.548 "num_base_bdevs_discovered": 4, 00:16:02.548 "num_base_bdevs_operational": 4, 00:16:02.548 "base_bdevs_list": [ 00:16:02.548 { 00:16:02.548 "name": "NewBaseBdev", 00:16:02.548 "uuid": "82a5627a-1109-43c7-a680-595edbf2f2f6", 00:16:02.548 "is_configured": true, 00:16:02.548 "data_offset": 0, 00:16:02.548 "data_size": 65536 00:16:02.548 }, 00:16:02.548 { 00:16:02.548 "name": "BaseBdev2", 00:16:02.548 "uuid": "e8d759e1-de50-419f-ae9d-8ea05d72c679", 00:16:02.548 "is_configured": true, 00:16:02.548 "data_offset": 0, 00:16:02.548 "data_size": 65536 00:16:02.548 }, 00:16:02.548 { 00:16:02.548 "name": "BaseBdev3", 00:16:02.548 "uuid": "0367fb25-53fa-4fb6-ae86-d2000528eed1", 00:16:02.548 "is_configured": true, 00:16:02.548 "data_offset": 0, 00:16:02.548 "data_size": 65536 00:16:02.548 }, 00:16:02.548 { 00:16:02.548 "name": "BaseBdev4", 00:16:02.548 "uuid": "612149ce-e1da-4a2e-b5ab-53b2c3289086", 00:16:02.548 "is_configured": true, 00:16:02.548 "data_offset": 0, 00:16:02.548 "data_size": 65536 00:16:02.548 } 00:16:02.548 ] 00:16:02.548 }' 00:16:02.548 17:32:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.548 17:32:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.116 17:32:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:03.116 17:32:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:03.116 17:32:36 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:03.116 17:32:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:03.116 17:32:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:03.116 17:32:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:03.116 17:32:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:03.116 17:32:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:03.116 17:32:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.116 17:32:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.116 [2024-12-07 17:32:36.261646] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:03.116 17:32:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.117 17:32:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:03.117 "name": "Existed_Raid", 00:16:03.117 "aliases": [ 00:16:03.117 "208bff19-8336-464f-9546-55574738735b" 00:16:03.117 ], 00:16:03.117 "product_name": "Raid Volume", 00:16:03.117 "block_size": 512, 00:16:03.117 "num_blocks": 196608, 00:16:03.117 "uuid": "208bff19-8336-464f-9546-55574738735b", 00:16:03.117 "assigned_rate_limits": { 00:16:03.117 "rw_ios_per_sec": 0, 00:16:03.117 "rw_mbytes_per_sec": 0, 00:16:03.117 "r_mbytes_per_sec": 0, 00:16:03.117 "w_mbytes_per_sec": 0 00:16:03.117 }, 00:16:03.117 "claimed": false, 00:16:03.117 "zoned": false, 00:16:03.117 "supported_io_types": { 00:16:03.117 "read": true, 00:16:03.117 "write": true, 00:16:03.117 "unmap": false, 00:16:03.117 "flush": false, 00:16:03.117 "reset": true, 00:16:03.117 "nvme_admin": false, 00:16:03.117 "nvme_io": false, 00:16:03.117 "nvme_io_md": 
false, 00:16:03.117 "write_zeroes": true, 00:16:03.117 "zcopy": false, 00:16:03.117 "get_zone_info": false, 00:16:03.117 "zone_management": false, 00:16:03.117 "zone_append": false, 00:16:03.117 "compare": false, 00:16:03.117 "compare_and_write": false, 00:16:03.117 "abort": false, 00:16:03.117 "seek_hole": false, 00:16:03.117 "seek_data": false, 00:16:03.117 "copy": false, 00:16:03.117 "nvme_iov_md": false 00:16:03.117 }, 00:16:03.117 "driver_specific": { 00:16:03.117 "raid": { 00:16:03.117 "uuid": "208bff19-8336-464f-9546-55574738735b", 00:16:03.117 "strip_size_kb": 64, 00:16:03.117 "state": "online", 00:16:03.117 "raid_level": "raid5f", 00:16:03.117 "superblock": false, 00:16:03.117 "num_base_bdevs": 4, 00:16:03.117 "num_base_bdevs_discovered": 4, 00:16:03.117 "num_base_bdevs_operational": 4, 00:16:03.117 "base_bdevs_list": [ 00:16:03.117 { 00:16:03.117 "name": "NewBaseBdev", 00:16:03.117 "uuid": "82a5627a-1109-43c7-a680-595edbf2f2f6", 00:16:03.117 "is_configured": true, 00:16:03.117 "data_offset": 0, 00:16:03.117 "data_size": 65536 00:16:03.117 }, 00:16:03.117 { 00:16:03.117 "name": "BaseBdev2", 00:16:03.117 "uuid": "e8d759e1-de50-419f-ae9d-8ea05d72c679", 00:16:03.117 "is_configured": true, 00:16:03.117 "data_offset": 0, 00:16:03.117 "data_size": 65536 00:16:03.117 }, 00:16:03.117 { 00:16:03.117 "name": "BaseBdev3", 00:16:03.117 "uuid": "0367fb25-53fa-4fb6-ae86-d2000528eed1", 00:16:03.117 "is_configured": true, 00:16:03.117 "data_offset": 0, 00:16:03.117 "data_size": 65536 00:16:03.117 }, 00:16:03.117 { 00:16:03.117 "name": "BaseBdev4", 00:16:03.117 "uuid": "612149ce-e1da-4a2e-b5ab-53b2c3289086", 00:16:03.117 "is_configured": true, 00:16:03.117 "data_offset": 0, 00:16:03.117 "data_size": 65536 00:16:03.117 } 00:16:03.117 ] 00:16:03.117 } 00:16:03.117 } 00:16:03.117 }' 00:16:03.117 17:32:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:03.117 17:32:36 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:03.117 BaseBdev2 00:16:03.117 BaseBdev3 00:16:03.117 BaseBdev4' 00:16:03.117 17:32:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:03.117 17:32:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:03.117 17:32:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:03.117 17:32:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:03.117 17:32:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:03.117 17:32:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.117 17:32:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.117 17:32:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.117 17:32:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:03.117 17:32:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:03.117 17:32:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:03.117 17:32:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:03.117 17:32:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:03.117 17:32:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.117 17:32:36 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:03.117 17:32:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.377 17:32:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:03.377 17:32:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:03.377 17:32:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:03.377 17:32:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:03.377 17:32:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.377 17:32:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:03.377 17:32:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.377 17:32:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.377 17:32:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:03.377 17:32:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:03.377 17:32:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:03.377 17:32:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:03.377 17:32:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:03.377 17:32:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.377 17:32:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.377 17:32:36 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.377 17:32:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:03.377 17:32:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:03.377 17:32:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:03.377 17:32:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.377 17:32:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.377 [2024-12-07 17:32:36.584902] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:03.377 [2024-12-07 17:32:36.584993] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:03.377 [2024-12-07 17:32:36.585073] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:03.377 [2024-12-07 17:32:36.585395] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:03.377 [2024-12-07 17:32:36.585407] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:03.377 17:32:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.377 17:32:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 82760 00:16:03.377 17:32:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 82760 ']' 00:16:03.377 17:32:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 82760 00:16:03.377 17:32:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:16:03.377 17:32:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:16:03.377 17:32:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82760 00:16:03.377 17:32:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:03.377 17:32:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:03.377 killing process with pid 82760 00:16:03.377 17:32:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82760' 00:16:03.377 17:32:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 82760 00:16:03.377 [2024-12-07 17:32:36.631815] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:03.377 17:32:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 82760 00:16:03.637 [2024-12-07 17:32:37.002865] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:05.020 ************************************ 00:16:05.020 END TEST raid5f_state_function_test 00:16:05.020 ************************************ 00:16:05.020 17:32:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:16:05.020 00:16:05.020 real 0m11.378s 00:16:05.020 user 0m18.081s 00:16:05.020 sys 0m2.130s 00:16:05.020 17:32:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:05.020 17:32:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.020 17:32:38 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:16:05.020 17:32:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:05.020 17:32:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:05.020 17:32:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:05.020 ************************************ 00:16:05.020 START TEST 
raid5f_state_function_test_sb 00:16:05.020 ************************************ 00:16:05.020 17:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:16:05.020 17:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:05.020 17:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:05.020 17:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:05.020 17:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:05.020 17:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:05.020 17:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:05.020 17:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:05.020 17:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:05.020 17:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:05.020 17:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:05.020 17:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:05.020 17:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:05.020 17:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:05.020 17:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:05.020 17:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:05.020 17:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:05.020 
17:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:05.020 17:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:05.020 17:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:05.020 17:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:05.020 17:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:05.020 17:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:05.020 17:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:05.020 17:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:05.020 17:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:05.020 17:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:05.020 17:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:05.020 17:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:05.020 17:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:05.020 17:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83426 00:16:05.020 17:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:05.020 17:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83426' 00:16:05.020 Process raid pid: 83426 00:16:05.020 17:32:38 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83426 00:16:05.020 17:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 83426 ']' 00:16:05.020 17:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:05.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:05.020 17:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:05.020 17:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:05.020 17:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:05.020 17:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.020 [2024-12-07 17:32:38.260972] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:16:05.020 [2024-12-07 17:32:38.261081] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:05.280 [2024-12-07 17:32:38.433843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:05.280 [2024-12-07 17:32:38.543724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:05.540 [2024-12-07 17:32:38.742670] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:05.540 [2024-12-07 17:32:38.742702] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:05.800 17:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:05.801 17:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:05.801 17:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:05.801 17:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.801 17:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.801 [2024-12-07 17:32:39.082712] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:05.801 [2024-12-07 17:32:39.082769] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:05.801 [2024-12-07 17:32:39.082778] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:05.801 [2024-12-07 17:32:39.082787] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:05.801 [2024-12-07 17:32:39.082793] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:16:05.801 [2024-12-07 17:32:39.082802] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:05.801 [2024-12-07 17:32:39.082808] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:05.801 [2024-12-07 17:32:39.082816] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:05.801 17:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.801 17:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:05.801 17:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:05.801 17:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:05.801 17:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:05.801 17:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:05.801 17:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:05.801 17:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.801 17:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.801 17:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.801 17:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.801 17:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:05.801 17:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:05.801 17:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.801 17:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.801 17:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.801 17:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.801 "name": "Existed_Raid", 00:16:05.801 "uuid": "92efc4cb-cd0e-4644-905b-c6f84a036bae", 00:16:05.801 "strip_size_kb": 64, 00:16:05.801 "state": "configuring", 00:16:05.801 "raid_level": "raid5f", 00:16:05.801 "superblock": true, 00:16:05.801 "num_base_bdevs": 4, 00:16:05.801 "num_base_bdevs_discovered": 0, 00:16:05.801 "num_base_bdevs_operational": 4, 00:16:05.801 "base_bdevs_list": [ 00:16:05.801 { 00:16:05.801 "name": "BaseBdev1", 00:16:05.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.801 "is_configured": false, 00:16:05.801 "data_offset": 0, 00:16:05.801 "data_size": 0 00:16:05.801 }, 00:16:05.801 { 00:16:05.801 "name": "BaseBdev2", 00:16:05.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.801 "is_configured": false, 00:16:05.801 "data_offset": 0, 00:16:05.801 "data_size": 0 00:16:05.801 }, 00:16:05.801 { 00:16:05.801 "name": "BaseBdev3", 00:16:05.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.801 "is_configured": false, 00:16:05.801 "data_offset": 0, 00:16:05.801 "data_size": 0 00:16:05.801 }, 00:16:05.801 { 00:16:05.801 "name": "BaseBdev4", 00:16:05.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.801 "is_configured": false, 00:16:05.801 "data_offset": 0, 00:16:05.801 "data_size": 0 00:16:05.801 } 00:16:05.801 ] 00:16:05.801 }' 00:16:05.801 17:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.801 17:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:06.371 17:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:06.371 17:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.371 17:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.371 [2024-12-07 17:32:39.533849] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:06.371 [2024-12-07 17:32:39.533954] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:06.371 17:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.371 17:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:06.371 17:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.371 17:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.371 [2024-12-07 17:32:39.545852] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:06.371 [2024-12-07 17:32:39.545943] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:06.371 [2024-12-07 17:32:39.545982] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:06.371 [2024-12-07 17:32:39.546008] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:06.371 [2024-12-07 17:32:39.546035] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:06.371 [2024-12-07 17:32:39.546057] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:06.371 [2024-12-07 17:32:39.546120] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:06.371 [2024-12-07 17:32:39.546143] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:06.371 17:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.371 17:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:06.371 17:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.371 17:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.371 [2024-12-07 17:32:39.596165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:06.371 BaseBdev1 00:16:06.371 17:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.371 17:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:06.371 17:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:06.371 17:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:06.371 17:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:06.371 17:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:06.371 17:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:06.371 17:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:06.371 17:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.371 17:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:16:06.371 17:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.371 17:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:06.371 17:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.371 17:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.371 [ 00:16:06.371 { 00:16:06.371 "name": "BaseBdev1", 00:16:06.371 "aliases": [ 00:16:06.371 "925de3ce-1d2d-4526-b815-c2cea9ba60e5" 00:16:06.371 ], 00:16:06.371 "product_name": "Malloc disk", 00:16:06.371 "block_size": 512, 00:16:06.371 "num_blocks": 65536, 00:16:06.371 "uuid": "925de3ce-1d2d-4526-b815-c2cea9ba60e5", 00:16:06.371 "assigned_rate_limits": { 00:16:06.371 "rw_ios_per_sec": 0, 00:16:06.371 "rw_mbytes_per_sec": 0, 00:16:06.371 "r_mbytes_per_sec": 0, 00:16:06.371 "w_mbytes_per_sec": 0 00:16:06.371 }, 00:16:06.371 "claimed": true, 00:16:06.371 "claim_type": "exclusive_write", 00:16:06.371 "zoned": false, 00:16:06.371 "supported_io_types": { 00:16:06.371 "read": true, 00:16:06.371 "write": true, 00:16:06.371 "unmap": true, 00:16:06.371 "flush": true, 00:16:06.371 "reset": true, 00:16:06.371 "nvme_admin": false, 00:16:06.371 "nvme_io": false, 00:16:06.371 "nvme_io_md": false, 00:16:06.371 "write_zeroes": true, 00:16:06.371 "zcopy": true, 00:16:06.371 "get_zone_info": false, 00:16:06.371 "zone_management": false, 00:16:06.371 "zone_append": false, 00:16:06.371 "compare": false, 00:16:06.371 "compare_and_write": false, 00:16:06.371 "abort": true, 00:16:06.371 "seek_hole": false, 00:16:06.371 "seek_data": false, 00:16:06.371 "copy": true, 00:16:06.371 "nvme_iov_md": false 00:16:06.371 }, 00:16:06.371 "memory_domains": [ 00:16:06.371 { 00:16:06.371 "dma_device_id": "system", 00:16:06.371 "dma_device_type": 1 00:16:06.371 }, 00:16:06.371 { 00:16:06.371 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:06.371 "dma_device_type": 2 00:16:06.371 } 00:16:06.371 ], 00:16:06.371 "driver_specific": {} 00:16:06.371 } 00:16:06.371 ] 00:16:06.371 17:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.371 17:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:06.371 17:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:06.371 17:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:06.371 17:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:06.371 17:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:06.371 17:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:06.371 17:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:06.371 17:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.371 17:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.371 17:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.371 17:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.371 17:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.371 17:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:06.371 17:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.371 17:32:39 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.371 17:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.371 17:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.371 "name": "Existed_Raid", 00:16:06.371 "uuid": "2d74292d-550d-4647-9e79-719cbc2d3aef", 00:16:06.371 "strip_size_kb": 64, 00:16:06.371 "state": "configuring", 00:16:06.371 "raid_level": "raid5f", 00:16:06.371 "superblock": true, 00:16:06.371 "num_base_bdevs": 4, 00:16:06.371 "num_base_bdevs_discovered": 1, 00:16:06.371 "num_base_bdevs_operational": 4, 00:16:06.371 "base_bdevs_list": [ 00:16:06.371 { 00:16:06.372 "name": "BaseBdev1", 00:16:06.372 "uuid": "925de3ce-1d2d-4526-b815-c2cea9ba60e5", 00:16:06.372 "is_configured": true, 00:16:06.372 "data_offset": 2048, 00:16:06.372 "data_size": 63488 00:16:06.372 }, 00:16:06.372 { 00:16:06.372 "name": "BaseBdev2", 00:16:06.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.372 "is_configured": false, 00:16:06.372 "data_offset": 0, 00:16:06.372 "data_size": 0 00:16:06.372 }, 00:16:06.372 { 00:16:06.372 "name": "BaseBdev3", 00:16:06.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.372 "is_configured": false, 00:16:06.372 "data_offset": 0, 00:16:06.372 "data_size": 0 00:16:06.372 }, 00:16:06.372 { 00:16:06.372 "name": "BaseBdev4", 00:16:06.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.372 "is_configured": false, 00:16:06.372 "data_offset": 0, 00:16:06.372 "data_size": 0 00:16:06.372 } 00:16:06.372 ] 00:16:06.372 }' 00:16:06.372 17:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.372 17:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.942 17:32:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:06.942 17:32:40 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.942 17:32:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.942 [2024-12-07 17:32:40.083609] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:06.942 [2024-12-07 17:32:40.083717] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:06.942 17:32:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.942 17:32:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:06.942 17:32:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.942 17:32:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.942 [2024-12-07 17:32:40.095661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:06.942 [2024-12-07 17:32:40.097476] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:06.942 [2024-12-07 17:32:40.097519] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:06.942 [2024-12-07 17:32:40.097530] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:06.942 [2024-12-07 17:32:40.097540] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:06.942 [2024-12-07 17:32:40.097547] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:06.942 [2024-12-07 17:32:40.097556] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:06.942 17:32:40 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.942 17:32:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:06.942 17:32:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:06.942 17:32:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:06.942 17:32:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:06.942 17:32:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:06.942 17:32:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:06.942 17:32:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:06.942 17:32:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:06.942 17:32:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.942 17:32:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.942 17:32:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.942 17:32:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.942 17:32:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.942 17:32:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:06.942 17:32:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.942 17:32:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.942 17:32:40 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.942 17:32:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.942 "name": "Existed_Raid", 00:16:06.942 "uuid": "7d30dfd4-401c-4484-91b0-ec3f1b9ae57f", 00:16:06.942 "strip_size_kb": 64, 00:16:06.942 "state": "configuring", 00:16:06.942 "raid_level": "raid5f", 00:16:06.942 "superblock": true, 00:16:06.942 "num_base_bdevs": 4, 00:16:06.942 "num_base_bdevs_discovered": 1, 00:16:06.942 "num_base_bdevs_operational": 4, 00:16:06.942 "base_bdevs_list": [ 00:16:06.942 { 00:16:06.942 "name": "BaseBdev1", 00:16:06.942 "uuid": "925de3ce-1d2d-4526-b815-c2cea9ba60e5", 00:16:06.942 "is_configured": true, 00:16:06.942 "data_offset": 2048, 00:16:06.942 "data_size": 63488 00:16:06.942 }, 00:16:06.942 { 00:16:06.942 "name": "BaseBdev2", 00:16:06.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.942 "is_configured": false, 00:16:06.942 "data_offset": 0, 00:16:06.942 "data_size": 0 00:16:06.942 }, 00:16:06.942 { 00:16:06.942 "name": "BaseBdev3", 00:16:06.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.942 "is_configured": false, 00:16:06.942 "data_offset": 0, 00:16:06.942 "data_size": 0 00:16:06.942 }, 00:16:06.942 { 00:16:06.942 "name": "BaseBdev4", 00:16:06.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.942 "is_configured": false, 00:16:06.942 "data_offset": 0, 00:16:06.942 "data_size": 0 00:16:06.942 } 00:16:06.942 ] 00:16:06.942 }' 00:16:06.942 17:32:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.942 17:32:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.202 17:32:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:07.202 17:32:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
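[Annotation] Each base bdev in this trace is created with `rpc_cmd bdev_malloc_create 32 512 -b BaseBdevN`: 32 is the size in MiB, 512 the block size in bytes. The `"num_blocks": 65536` reported by `bdev_get_bdevs` follows directly from that quotient — a quick sketch:

```shell
# Sizing behind "bdev_malloc_create 32 512": size in MiB divided by the
# block size in bytes gives the bdev's block count.
size_mib=32
block_size=512
num_blocks=$(( size_mib * 1024 * 1024 / block_size ))
echo "$num_blocks"   # matches "num_blocks": 65536 in the log's JSON
```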
00:16:07.202 17:32:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.462 [2024-12-07 17:32:40.607219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:07.462 BaseBdev2 00:16:07.462 17:32:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.462 17:32:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:07.462 17:32:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:07.462 17:32:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:07.462 17:32:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:07.462 17:32:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:07.462 17:32:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:07.462 17:32:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:07.462 17:32:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.462 17:32:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.462 17:32:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.463 17:32:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:07.463 17:32:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.463 17:32:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.463 [ 00:16:07.463 { 00:16:07.463 "name": "BaseBdev2", 00:16:07.463 "aliases": [ 00:16:07.463 
"0de312a5-5470-4c11-ae58-ae4eec930285" 00:16:07.463 ], 00:16:07.463 "product_name": "Malloc disk", 00:16:07.463 "block_size": 512, 00:16:07.463 "num_blocks": 65536, 00:16:07.463 "uuid": "0de312a5-5470-4c11-ae58-ae4eec930285", 00:16:07.463 "assigned_rate_limits": { 00:16:07.463 "rw_ios_per_sec": 0, 00:16:07.463 "rw_mbytes_per_sec": 0, 00:16:07.463 "r_mbytes_per_sec": 0, 00:16:07.463 "w_mbytes_per_sec": 0 00:16:07.463 }, 00:16:07.463 "claimed": true, 00:16:07.463 "claim_type": "exclusive_write", 00:16:07.463 "zoned": false, 00:16:07.463 "supported_io_types": { 00:16:07.463 "read": true, 00:16:07.463 "write": true, 00:16:07.463 "unmap": true, 00:16:07.463 "flush": true, 00:16:07.463 "reset": true, 00:16:07.463 "nvme_admin": false, 00:16:07.463 "nvme_io": false, 00:16:07.463 "nvme_io_md": false, 00:16:07.463 "write_zeroes": true, 00:16:07.463 "zcopy": true, 00:16:07.463 "get_zone_info": false, 00:16:07.463 "zone_management": false, 00:16:07.463 "zone_append": false, 00:16:07.463 "compare": false, 00:16:07.463 "compare_and_write": false, 00:16:07.463 "abort": true, 00:16:07.463 "seek_hole": false, 00:16:07.463 "seek_data": false, 00:16:07.463 "copy": true, 00:16:07.463 "nvme_iov_md": false 00:16:07.463 }, 00:16:07.463 "memory_domains": [ 00:16:07.463 { 00:16:07.463 "dma_device_id": "system", 00:16:07.463 "dma_device_type": 1 00:16:07.463 }, 00:16:07.463 { 00:16:07.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:07.463 "dma_device_type": 2 00:16:07.463 } 00:16:07.463 ], 00:16:07.463 "driver_specific": {} 00:16:07.463 } 00:16:07.463 ] 00:16:07.463 17:32:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.463 17:32:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:07.463 17:32:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:07.463 17:32:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:16:07.463 17:32:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:07.463 17:32:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:07.463 17:32:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:07.463 17:32:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:07.463 17:32:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:07.463 17:32:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:07.463 17:32:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.463 17:32:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.463 17:32:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.463 17:32:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.463 17:32:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.463 17:32:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:07.463 17:32:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.463 17:32:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.463 17:32:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.463 17:32:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.463 "name": "Existed_Raid", 00:16:07.463 "uuid": 
"7d30dfd4-401c-4484-91b0-ec3f1b9ae57f", 00:16:07.463 "strip_size_kb": 64, 00:16:07.463 "state": "configuring", 00:16:07.463 "raid_level": "raid5f", 00:16:07.463 "superblock": true, 00:16:07.463 "num_base_bdevs": 4, 00:16:07.463 "num_base_bdevs_discovered": 2, 00:16:07.463 "num_base_bdevs_operational": 4, 00:16:07.463 "base_bdevs_list": [ 00:16:07.463 { 00:16:07.463 "name": "BaseBdev1", 00:16:07.463 "uuid": "925de3ce-1d2d-4526-b815-c2cea9ba60e5", 00:16:07.463 "is_configured": true, 00:16:07.463 "data_offset": 2048, 00:16:07.463 "data_size": 63488 00:16:07.463 }, 00:16:07.463 { 00:16:07.463 "name": "BaseBdev2", 00:16:07.463 "uuid": "0de312a5-5470-4c11-ae58-ae4eec930285", 00:16:07.463 "is_configured": true, 00:16:07.463 "data_offset": 2048, 00:16:07.463 "data_size": 63488 00:16:07.463 }, 00:16:07.463 { 00:16:07.463 "name": "BaseBdev3", 00:16:07.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.463 "is_configured": false, 00:16:07.463 "data_offset": 0, 00:16:07.463 "data_size": 0 00:16:07.463 }, 00:16:07.463 { 00:16:07.463 "name": "BaseBdev4", 00:16:07.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.463 "is_configured": false, 00:16:07.463 "data_offset": 0, 00:16:07.463 "data_size": 0 00:16:07.463 } 00:16:07.463 ] 00:16:07.463 }' 00:16:07.463 17:32:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.463 17:32:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.724 17:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:07.724 17:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.724 17:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.984 [2024-12-07 17:32:41.140359] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:07.984 BaseBdev3 
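[Annotation] Because the raid was created with `-s` (superblock), each claimed 65536-block base bdev reports `data_offset: 2048` and `data_size: 63488` once configured. Assuming the reserved front region accounts for the whole difference, the relation is simply:

```shell
# Per-base-bdev data layout with the -s (superblock) flag, using the
# values reported in this log's base_bdevs_list entries.
num_blocks=65536
data_offset=2048   # blocks reserved at the front of each base bdev
data_size=$(( num_blocks - data_offset ))
echo "$data_size"  # matches "data_size": 63488
```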
00:16:07.984 17:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.984 17:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:07.984 17:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:07.984 17:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:07.984 17:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:07.984 17:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:07.984 17:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:07.984 17:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:07.984 17:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.984 17:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.984 17:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.984 17:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:07.984 17:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.984 17:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.984 [ 00:16:07.984 { 00:16:07.984 "name": "BaseBdev3", 00:16:07.984 "aliases": [ 00:16:07.984 "420ffe74-4eed-452f-bf4d-cdd95ef08d34" 00:16:07.984 ], 00:16:07.984 "product_name": "Malloc disk", 00:16:07.984 "block_size": 512, 00:16:07.984 "num_blocks": 65536, 00:16:07.984 "uuid": "420ffe74-4eed-452f-bf4d-cdd95ef08d34", 00:16:07.984 
"assigned_rate_limits": { 00:16:07.984 "rw_ios_per_sec": 0, 00:16:07.984 "rw_mbytes_per_sec": 0, 00:16:07.984 "r_mbytes_per_sec": 0, 00:16:07.984 "w_mbytes_per_sec": 0 00:16:07.984 }, 00:16:07.984 "claimed": true, 00:16:07.984 "claim_type": "exclusive_write", 00:16:07.984 "zoned": false, 00:16:07.984 "supported_io_types": { 00:16:07.984 "read": true, 00:16:07.984 "write": true, 00:16:07.984 "unmap": true, 00:16:07.984 "flush": true, 00:16:07.984 "reset": true, 00:16:07.984 "nvme_admin": false, 00:16:07.984 "nvme_io": false, 00:16:07.984 "nvme_io_md": false, 00:16:07.984 "write_zeroes": true, 00:16:07.984 "zcopy": true, 00:16:07.984 "get_zone_info": false, 00:16:07.984 "zone_management": false, 00:16:07.984 "zone_append": false, 00:16:07.984 "compare": false, 00:16:07.984 "compare_and_write": false, 00:16:07.984 "abort": true, 00:16:07.984 "seek_hole": false, 00:16:07.984 "seek_data": false, 00:16:07.984 "copy": true, 00:16:07.984 "nvme_iov_md": false 00:16:07.984 }, 00:16:07.984 "memory_domains": [ 00:16:07.984 { 00:16:07.984 "dma_device_id": "system", 00:16:07.984 "dma_device_type": 1 00:16:07.984 }, 00:16:07.984 { 00:16:07.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:07.984 "dma_device_type": 2 00:16:07.984 } 00:16:07.984 ], 00:16:07.984 "driver_specific": {} 00:16:07.984 } 00:16:07.984 ] 00:16:07.984 17:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.984 17:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:07.984 17:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:07.984 17:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:07.984 17:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:07.984 17:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:16:07.984 17:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:07.984 17:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:07.984 17:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:07.984 17:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:07.984 17:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.984 17:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.984 17:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.984 17:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.984 17:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:07.984 17:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.984 17:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.984 17:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.984 17:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.984 17:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.984 "name": "Existed_Raid", 00:16:07.984 "uuid": "7d30dfd4-401c-4484-91b0-ec3f1b9ae57f", 00:16:07.984 "strip_size_kb": 64, 00:16:07.984 "state": "configuring", 00:16:07.984 "raid_level": "raid5f", 00:16:07.984 "superblock": true, 00:16:07.984 "num_base_bdevs": 4, 00:16:07.984 "num_base_bdevs_discovered": 3, 
00:16:07.985 "num_base_bdevs_operational": 4, 00:16:07.985 "base_bdevs_list": [ 00:16:07.985 { 00:16:07.985 "name": "BaseBdev1", 00:16:07.985 "uuid": "925de3ce-1d2d-4526-b815-c2cea9ba60e5", 00:16:07.985 "is_configured": true, 00:16:07.985 "data_offset": 2048, 00:16:07.985 "data_size": 63488 00:16:07.985 }, 00:16:07.985 { 00:16:07.985 "name": "BaseBdev2", 00:16:07.985 "uuid": "0de312a5-5470-4c11-ae58-ae4eec930285", 00:16:07.985 "is_configured": true, 00:16:07.985 "data_offset": 2048, 00:16:07.985 "data_size": 63488 00:16:07.985 }, 00:16:07.985 { 00:16:07.985 "name": "BaseBdev3", 00:16:07.985 "uuid": "420ffe74-4eed-452f-bf4d-cdd95ef08d34", 00:16:07.985 "is_configured": true, 00:16:07.985 "data_offset": 2048, 00:16:07.985 "data_size": 63488 00:16:07.985 }, 00:16:07.985 { 00:16:07.985 "name": "BaseBdev4", 00:16:07.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.985 "is_configured": false, 00:16:07.985 "data_offset": 0, 00:16:07.985 "data_size": 0 00:16:07.985 } 00:16:07.985 ] 00:16:07.985 }' 00:16:07.985 17:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.985 17:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.244 17:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:08.244 17:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.244 17:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.505 [2024-12-07 17:32:41.660499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:08.505 [2024-12-07 17:32:41.660783] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:08.505 [2024-12-07 17:32:41.660799] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:08.505 [2024-12-07 
17:32:41.661101] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:08.505 BaseBdev4 00:16:08.505 17:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.505 17:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:08.505 17:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:08.505 17:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:08.505 17:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:08.505 17:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:08.505 17:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:08.505 17:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:08.505 17:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.505 17:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.505 [2024-12-07 17:32:41.668916] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:08.505 [2024-12-07 17:32:41.668995] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:08.505 [2024-12-07 17:32:41.669318] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:08.505 17:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.505 17:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:08.505 17:32:41 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.505 17:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.505 [ 00:16:08.505 { 00:16:08.505 "name": "BaseBdev4", 00:16:08.505 "aliases": [ 00:16:08.505 "972ec8c9-4727-4def-b1b7-cdcd2dcfdac8" 00:16:08.505 ], 00:16:08.505 "product_name": "Malloc disk", 00:16:08.505 "block_size": 512, 00:16:08.505 "num_blocks": 65536, 00:16:08.505 "uuid": "972ec8c9-4727-4def-b1b7-cdcd2dcfdac8", 00:16:08.505 "assigned_rate_limits": { 00:16:08.505 "rw_ios_per_sec": 0, 00:16:08.505 "rw_mbytes_per_sec": 0, 00:16:08.505 "r_mbytes_per_sec": 0, 00:16:08.505 "w_mbytes_per_sec": 0 00:16:08.505 }, 00:16:08.505 "claimed": true, 00:16:08.505 "claim_type": "exclusive_write", 00:16:08.505 "zoned": false, 00:16:08.505 "supported_io_types": { 00:16:08.505 "read": true, 00:16:08.505 "write": true, 00:16:08.505 "unmap": true, 00:16:08.505 "flush": true, 00:16:08.505 "reset": true, 00:16:08.505 "nvme_admin": false, 00:16:08.505 "nvme_io": false, 00:16:08.505 "nvme_io_md": false, 00:16:08.505 "write_zeroes": true, 00:16:08.505 "zcopy": true, 00:16:08.505 "get_zone_info": false, 00:16:08.505 "zone_management": false, 00:16:08.505 "zone_append": false, 00:16:08.505 "compare": false, 00:16:08.505 "compare_and_write": false, 00:16:08.505 "abort": true, 00:16:08.505 "seek_hole": false, 00:16:08.505 "seek_data": false, 00:16:08.505 "copy": true, 00:16:08.505 "nvme_iov_md": false 00:16:08.505 }, 00:16:08.505 "memory_domains": [ 00:16:08.505 { 00:16:08.505 "dma_device_id": "system", 00:16:08.505 "dma_device_type": 1 00:16:08.505 }, 00:16:08.505 { 00:16:08.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:08.505 "dma_device_type": 2 00:16:08.505 } 00:16:08.505 ], 00:16:08.505 "driver_specific": {} 00:16:08.505 } 00:16:08.505 ] 00:16:08.505 17:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.505 17:32:41 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:08.505 17:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:08.505 17:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:08.505 17:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:08.505 17:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:08.505 17:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:08.505 17:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:08.505 17:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:08.505 17:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:08.505 17:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.505 17:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.505 17:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.505 17:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.505 17:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.505 17:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:08.505 17:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.505 17:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:08.505 17:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.505 17:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.505 "name": "Existed_Raid", 00:16:08.505 "uuid": "7d30dfd4-401c-4484-91b0-ec3f1b9ae57f", 00:16:08.505 "strip_size_kb": 64, 00:16:08.505 "state": "online", 00:16:08.505 "raid_level": "raid5f", 00:16:08.505 "superblock": true, 00:16:08.505 "num_base_bdevs": 4, 00:16:08.505 "num_base_bdevs_discovered": 4, 00:16:08.505 "num_base_bdevs_operational": 4, 00:16:08.505 "base_bdevs_list": [ 00:16:08.505 { 00:16:08.505 "name": "BaseBdev1", 00:16:08.505 "uuid": "925de3ce-1d2d-4526-b815-c2cea9ba60e5", 00:16:08.505 "is_configured": true, 00:16:08.505 "data_offset": 2048, 00:16:08.505 "data_size": 63488 00:16:08.505 }, 00:16:08.505 { 00:16:08.505 "name": "BaseBdev2", 00:16:08.505 "uuid": "0de312a5-5470-4c11-ae58-ae4eec930285", 00:16:08.505 "is_configured": true, 00:16:08.505 "data_offset": 2048, 00:16:08.505 "data_size": 63488 00:16:08.505 }, 00:16:08.505 { 00:16:08.505 "name": "BaseBdev3", 00:16:08.505 "uuid": "420ffe74-4eed-452f-bf4d-cdd95ef08d34", 00:16:08.505 "is_configured": true, 00:16:08.505 "data_offset": 2048, 00:16:08.505 "data_size": 63488 00:16:08.505 }, 00:16:08.505 { 00:16:08.505 "name": "BaseBdev4", 00:16:08.506 "uuid": "972ec8c9-4727-4def-b1b7-cdcd2dcfdac8", 00:16:08.506 "is_configured": true, 00:16:08.506 "data_offset": 2048, 00:16:08.506 "data_size": 63488 00:16:08.506 } 00:16:08.506 ] 00:16:08.506 }' 00:16:08.506 17:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.506 17:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.077 17:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:09.077 17:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:16:09.077 17:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:09.077 17:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:09.077 17:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:09.077 17:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:09.077 17:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:09.077 17:32:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.077 17:32:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.077 17:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:09.077 [2024-12-07 17:32:42.209132] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:09.077 17:32:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.077 17:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:09.077 "name": "Existed_Raid", 00:16:09.077 "aliases": [ 00:16:09.077 "7d30dfd4-401c-4484-91b0-ec3f1b9ae57f" 00:16:09.077 ], 00:16:09.077 "product_name": "Raid Volume", 00:16:09.077 "block_size": 512, 00:16:09.077 "num_blocks": 190464, 00:16:09.077 "uuid": "7d30dfd4-401c-4484-91b0-ec3f1b9ae57f", 00:16:09.077 "assigned_rate_limits": { 00:16:09.077 "rw_ios_per_sec": 0, 00:16:09.077 "rw_mbytes_per_sec": 0, 00:16:09.077 "r_mbytes_per_sec": 0, 00:16:09.077 "w_mbytes_per_sec": 0 00:16:09.077 }, 00:16:09.077 "claimed": false, 00:16:09.077 "zoned": false, 00:16:09.077 "supported_io_types": { 00:16:09.077 "read": true, 00:16:09.077 "write": true, 00:16:09.077 "unmap": false, 00:16:09.077 "flush": false, 
00:16:09.077 "reset": true, 00:16:09.077 "nvme_admin": false, 00:16:09.077 "nvme_io": false, 00:16:09.077 "nvme_io_md": false, 00:16:09.077 "write_zeroes": true, 00:16:09.077 "zcopy": false, 00:16:09.077 "get_zone_info": false, 00:16:09.077 "zone_management": false, 00:16:09.077 "zone_append": false, 00:16:09.077 "compare": false, 00:16:09.077 "compare_and_write": false, 00:16:09.077 "abort": false, 00:16:09.077 "seek_hole": false, 00:16:09.077 "seek_data": false, 00:16:09.077 "copy": false, 00:16:09.077 "nvme_iov_md": false 00:16:09.077 }, 00:16:09.077 "driver_specific": { 00:16:09.077 "raid": { 00:16:09.077 "uuid": "7d30dfd4-401c-4484-91b0-ec3f1b9ae57f", 00:16:09.077 "strip_size_kb": 64, 00:16:09.077 "state": "online", 00:16:09.077 "raid_level": "raid5f", 00:16:09.077 "superblock": true, 00:16:09.077 "num_base_bdevs": 4, 00:16:09.077 "num_base_bdevs_discovered": 4, 00:16:09.077 "num_base_bdevs_operational": 4, 00:16:09.077 "base_bdevs_list": [ 00:16:09.077 { 00:16:09.077 "name": "BaseBdev1", 00:16:09.077 "uuid": "925de3ce-1d2d-4526-b815-c2cea9ba60e5", 00:16:09.077 "is_configured": true, 00:16:09.077 "data_offset": 2048, 00:16:09.077 "data_size": 63488 00:16:09.077 }, 00:16:09.077 { 00:16:09.077 "name": "BaseBdev2", 00:16:09.077 "uuid": "0de312a5-5470-4c11-ae58-ae4eec930285", 00:16:09.077 "is_configured": true, 00:16:09.077 "data_offset": 2048, 00:16:09.077 "data_size": 63488 00:16:09.077 }, 00:16:09.077 { 00:16:09.077 "name": "BaseBdev3", 00:16:09.077 "uuid": "420ffe74-4eed-452f-bf4d-cdd95ef08d34", 00:16:09.077 "is_configured": true, 00:16:09.077 "data_offset": 2048, 00:16:09.077 "data_size": 63488 00:16:09.077 }, 00:16:09.077 { 00:16:09.077 "name": "BaseBdev4", 00:16:09.077 "uuid": "972ec8c9-4727-4def-b1b7-cdcd2dcfdac8", 00:16:09.077 "is_configured": true, 00:16:09.077 "data_offset": 2048, 00:16:09.077 "data_size": 63488 00:16:09.077 } 00:16:09.077 ] 00:16:09.077 } 00:16:09.077 } 00:16:09.077 }' 00:16:09.077 17:32:42 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:09.077 17:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:09.077 BaseBdev2 00:16:09.077 BaseBdev3 00:16:09.077 BaseBdev4' 00:16:09.077 17:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:09.077 17:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:09.077 17:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:09.077 17:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:09.077 17:32:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.077 17:32:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.077 17:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:09.077 17:32:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.077 17:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:09.077 17:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:09.077 17:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:09.077 17:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:09.077 17:32:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.077 17:32:42 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:09.077 17:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:09.077 17:32:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.077 17:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:09.077 17:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:09.077 17:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:09.077 17:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:09.077 17:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:09.077 17:32:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.078 17:32:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.078 17:32:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.338 17:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:09.338 17:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:09.338 17:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:09.338 17:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:09.338 17:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:09.338 17:32:42 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.338 17:32:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.338 17:32:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.338 17:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:09.338 17:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:09.338 17:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:09.338 17:32:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.338 17:32:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.338 [2024-12-07 17:32:42.536344] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:09.338 17:32:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.338 17:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:09.338 17:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:09.338 17:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:09.338 17:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:16:09.338 17:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:09.338 17:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:09.338 17:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:09.338 17:32:42 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:09.338 17:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:09.338 17:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:09.338 17:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:09.338 17:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.338 17:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.338 17:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.338 17:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.338 17:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.338 17:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:09.338 17:32:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.338 17:32:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.338 17:32:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.338 17:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.338 "name": "Existed_Raid", 00:16:09.338 "uuid": "7d30dfd4-401c-4484-91b0-ec3f1b9ae57f", 00:16:09.338 "strip_size_kb": 64, 00:16:09.338 "state": "online", 00:16:09.338 "raid_level": "raid5f", 00:16:09.338 "superblock": true, 00:16:09.338 "num_base_bdevs": 4, 00:16:09.338 "num_base_bdevs_discovered": 3, 00:16:09.338 "num_base_bdevs_operational": 3, 00:16:09.338 "base_bdevs_list": [ 00:16:09.338 { 00:16:09.338 "name": 
null, 00:16:09.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.338 "is_configured": false, 00:16:09.338 "data_offset": 0, 00:16:09.338 "data_size": 63488 00:16:09.338 }, 00:16:09.338 { 00:16:09.338 "name": "BaseBdev2", 00:16:09.338 "uuid": "0de312a5-5470-4c11-ae58-ae4eec930285", 00:16:09.338 "is_configured": true, 00:16:09.338 "data_offset": 2048, 00:16:09.338 "data_size": 63488 00:16:09.338 }, 00:16:09.338 { 00:16:09.338 "name": "BaseBdev3", 00:16:09.338 "uuid": "420ffe74-4eed-452f-bf4d-cdd95ef08d34", 00:16:09.338 "is_configured": true, 00:16:09.338 "data_offset": 2048, 00:16:09.338 "data_size": 63488 00:16:09.338 }, 00:16:09.338 { 00:16:09.338 "name": "BaseBdev4", 00:16:09.338 "uuid": "972ec8c9-4727-4def-b1b7-cdcd2dcfdac8", 00:16:09.338 "is_configured": true, 00:16:09.338 "data_offset": 2048, 00:16:09.338 "data_size": 63488 00:16:09.338 } 00:16:09.338 ] 00:16:09.338 }' 00:16:09.338 17:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.338 17:32:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.909 17:32:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:09.909 17:32:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:09.909 17:32:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.909 17:32:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:09.909 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.909 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.909 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.909 17:32:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:16:09.909 17:32:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:09.909 17:32:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:09.909 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.909 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.909 [2024-12-07 17:32:43.141052] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:09.909 [2024-12-07 17:32:43.141216] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:09.909 [2024-12-07 17:32:43.232687] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:09.909 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.909 17:32:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:09.909 17:32:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:09.909 17:32:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.909 17:32:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:09.909 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.909 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.909 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.909 17:32:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:09.909 17:32:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:16:09.909 17:32:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:09.909 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.909 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.169 [2024-12-07 17:32:43.292647] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:10.169 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.169 17:32:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:10.169 17:32:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:10.169 17:32:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.169 17:32:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:10.169 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.169 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.169 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.169 17:32:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:10.169 17:32:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:10.169 17:32:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:10.169 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.169 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.169 [2024-12-07 
17:32:43.447105] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:10.169 [2024-12-07 17:32:43.447160] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:10.169 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.169 17:32:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:10.169 17:32:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:10.169 17:32:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.169 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.169 17:32:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:10.169 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.440 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.440 17:32:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:10.440 17:32:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:10.440 17:32:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:10.440 17:32:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:10.440 17:32:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:10.440 17:32:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:10.440 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.440 17:32:43 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.440 BaseBdev2 00:16:10.440 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.440 17:32:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:10.440 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:10.440 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:10.440 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:10.440 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:10.440 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:10.440 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:10.440 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.440 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.440 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.440 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:10.440 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.440 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.440 [ 00:16:10.440 { 00:16:10.440 "name": "BaseBdev2", 00:16:10.440 "aliases": [ 00:16:10.440 "2623661d-49ad-45af-b123-76706f2a9978" 00:16:10.440 ], 00:16:10.440 "product_name": "Malloc disk", 00:16:10.440 "block_size": 512, 00:16:10.440 
"num_blocks": 65536, 00:16:10.440 "uuid": "2623661d-49ad-45af-b123-76706f2a9978", 00:16:10.440 "assigned_rate_limits": { 00:16:10.440 "rw_ios_per_sec": 0, 00:16:10.440 "rw_mbytes_per_sec": 0, 00:16:10.440 "r_mbytes_per_sec": 0, 00:16:10.440 "w_mbytes_per_sec": 0 00:16:10.440 }, 00:16:10.440 "claimed": false, 00:16:10.440 "zoned": false, 00:16:10.440 "supported_io_types": { 00:16:10.440 "read": true, 00:16:10.440 "write": true, 00:16:10.440 "unmap": true, 00:16:10.440 "flush": true, 00:16:10.440 "reset": true, 00:16:10.440 "nvme_admin": false, 00:16:10.440 "nvme_io": false, 00:16:10.440 "nvme_io_md": false, 00:16:10.440 "write_zeroes": true, 00:16:10.440 "zcopy": true, 00:16:10.440 "get_zone_info": false, 00:16:10.440 "zone_management": false, 00:16:10.440 "zone_append": false, 00:16:10.440 "compare": false, 00:16:10.440 "compare_and_write": false, 00:16:10.440 "abort": true, 00:16:10.440 "seek_hole": false, 00:16:10.440 "seek_data": false, 00:16:10.440 "copy": true, 00:16:10.440 "nvme_iov_md": false 00:16:10.440 }, 00:16:10.440 "memory_domains": [ 00:16:10.440 { 00:16:10.440 "dma_device_id": "system", 00:16:10.440 "dma_device_type": 1 00:16:10.440 }, 00:16:10.440 { 00:16:10.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:10.440 "dma_device_type": 2 00:16:10.440 } 00:16:10.440 ], 00:16:10.440 "driver_specific": {} 00:16:10.440 } 00:16:10.440 ] 00:16:10.440 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.440 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:10.441 17:32:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:10.441 17:32:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:10.441 17:32:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:10.441 17:32:43 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.441 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.441 BaseBdev3 00:16:10.441 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.441 17:32:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:10.441 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:10.441 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:10.441 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:10.441 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:10.441 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:10.441 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:10.441 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.441 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.441 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.441 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:10.441 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.441 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.441 [ 00:16:10.441 { 00:16:10.441 "name": "BaseBdev3", 00:16:10.441 "aliases": [ 00:16:10.441 
"f98570d9-1944-4421-9fc2-a58f1b092e63" 00:16:10.441 ], 00:16:10.441 "product_name": "Malloc disk", 00:16:10.441 "block_size": 512, 00:16:10.441 "num_blocks": 65536, 00:16:10.441 "uuid": "f98570d9-1944-4421-9fc2-a58f1b092e63", 00:16:10.441 "assigned_rate_limits": { 00:16:10.441 "rw_ios_per_sec": 0, 00:16:10.441 "rw_mbytes_per_sec": 0, 00:16:10.441 "r_mbytes_per_sec": 0, 00:16:10.441 "w_mbytes_per_sec": 0 00:16:10.441 }, 00:16:10.441 "claimed": false, 00:16:10.441 "zoned": false, 00:16:10.441 "supported_io_types": { 00:16:10.441 "read": true, 00:16:10.441 "write": true, 00:16:10.441 "unmap": true, 00:16:10.441 "flush": true, 00:16:10.441 "reset": true, 00:16:10.441 "nvme_admin": false, 00:16:10.441 "nvme_io": false, 00:16:10.441 "nvme_io_md": false, 00:16:10.441 "write_zeroes": true, 00:16:10.441 "zcopy": true, 00:16:10.441 "get_zone_info": false, 00:16:10.441 "zone_management": false, 00:16:10.441 "zone_append": false, 00:16:10.441 "compare": false, 00:16:10.441 "compare_and_write": false, 00:16:10.441 "abort": true, 00:16:10.441 "seek_hole": false, 00:16:10.441 "seek_data": false, 00:16:10.441 "copy": true, 00:16:10.441 "nvme_iov_md": false 00:16:10.441 }, 00:16:10.441 "memory_domains": [ 00:16:10.441 { 00:16:10.441 "dma_device_id": "system", 00:16:10.441 "dma_device_type": 1 00:16:10.441 }, 00:16:10.441 { 00:16:10.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:10.441 "dma_device_type": 2 00:16:10.441 } 00:16:10.441 ], 00:16:10.441 "driver_specific": {} 00:16:10.441 } 00:16:10.441 ] 00:16:10.441 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.441 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:10.441 17:32:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:10.441 17:32:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:10.441 17:32:43 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:10.441 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.441 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.441 BaseBdev4 00:16:10.441 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.441 17:32:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:10.441 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:10.441 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:10.441 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:10.441 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:10.441 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:10.441 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:10.441 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.441 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.441 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.441 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:10.441 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.441 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:16:10.717 [ 00:16:10.717 { 00:16:10.717 "name": "BaseBdev4", 00:16:10.717 "aliases": [ 00:16:10.717 "30d46417-654e-45a6-a5f2-fc40b07cfe29" 00:16:10.717 ], 00:16:10.718 "product_name": "Malloc disk", 00:16:10.718 "block_size": 512, 00:16:10.718 "num_blocks": 65536, 00:16:10.718 "uuid": "30d46417-654e-45a6-a5f2-fc40b07cfe29", 00:16:10.718 "assigned_rate_limits": { 00:16:10.718 "rw_ios_per_sec": 0, 00:16:10.718 "rw_mbytes_per_sec": 0, 00:16:10.718 "r_mbytes_per_sec": 0, 00:16:10.718 "w_mbytes_per_sec": 0 00:16:10.718 }, 00:16:10.718 "claimed": false, 00:16:10.718 "zoned": false, 00:16:10.718 "supported_io_types": { 00:16:10.718 "read": true, 00:16:10.718 "write": true, 00:16:10.718 "unmap": true, 00:16:10.718 "flush": true, 00:16:10.718 "reset": true, 00:16:10.718 "nvme_admin": false, 00:16:10.718 "nvme_io": false, 00:16:10.718 "nvme_io_md": false, 00:16:10.718 "write_zeroes": true, 00:16:10.718 "zcopy": true, 00:16:10.718 "get_zone_info": false, 00:16:10.718 "zone_management": false, 00:16:10.718 "zone_append": false, 00:16:10.718 "compare": false, 00:16:10.718 "compare_and_write": false, 00:16:10.718 "abort": true, 00:16:10.718 "seek_hole": false, 00:16:10.718 "seek_data": false, 00:16:10.718 "copy": true, 00:16:10.718 "nvme_iov_md": false 00:16:10.718 }, 00:16:10.718 "memory_domains": [ 00:16:10.718 { 00:16:10.718 "dma_device_id": "system", 00:16:10.718 "dma_device_type": 1 00:16:10.718 }, 00:16:10.718 { 00:16:10.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:10.718 "dma_device_type": 2 00:16:10.718 } 00:16:10.718 ], 00:16:10.718 "driver_specific": {} 00:16:10.718 } 00:16:10.718 ] 00:16:10.718 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.718 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:10.718 17:32:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:10.718 17:32:43 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:10.718 17:32:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:10.718 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.718 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.718 [2024-12-07 17:32:43.843927] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:10.718 [2024-12-07 17:32:43.843994] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:10.718 [2024-12-07 17:32:43.844016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:10.718 [2024-12-07 17:32:43.845816] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:10.718 [2024-12-07 17:32:43.845871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:10.718 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.718 17:32:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:10.718 17:32:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:10.718 17:32:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:10.718 17:32:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:10.718 17:32:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:10.718 17:32:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:16:10.718 17:32:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.718 17:32:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.718 17:32:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.718 17:32:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.718 17:32:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.718 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.718 17:32:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:10.718 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.718 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.718 17:32:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.718 "name": "Existed_Raid", 00:16:10.718 "uuid": "9c03019d-ee9d-4632-9726-be66ae68a599", 00:16:10.718 "strip_size_kb": 64, 00:16:10.718 "state": "configuring", 00:16:10.718 "raid_level": "raid5f", 00:16:10.718 "superblock": true, 00:16:10.718 "num_base_bdevs": 4, 00:16:10.718 "num_base_bdevs_discovered": 3, 00:16:10.718 "num_base_bdevs_operational": 4, 00:16:10.718 "base_bdevs_list": [ 00:16:10.718 { 00:16:10.718 "name": "BaseBdev1", 00:16:10.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.718 "is_configured": false, 00:16:10.718 "data_offset": 0, 00:16:10.718 "data_size": 0 00:16:10.718 }, 00:16:10.718 { 00:16:10.718 "name": "BaseBdev2", 00:16:10.718 "uuid": "2623661d-49ad-45af-b123-76706f2a9978", 00:16:10.718 "is_configured": true, 00:16:10.718 "data_offset": 2048, 00:16:10.718 
"data_size": 63488 00:16:10.718 }, 00:16:10.718 { 00:16:10.718 "name": "BaseBdev3", 00:16:10.718 "uuid": "f98570d9-1944-4421-9fc2-a58f1b092e63", 00:16:10.718 "is_configured": true, 00:16:10.718 "data_offset": 2048, 00:16:10.718 "data_size": 63488 00:16:10.718 }, 00:16:10.718 { 00:16:10.718 "name": "BaseBdev4", 00:16:10.718 "uuid": "30d46417-654e-45a6-a5f2-fc40b07cfe29", 00:16:10.718 "is_configured": true, 00:16:10.718 "data_offset": 2048, 00:16:10.718 "data_size": 63488 00:16:10.718 } 00:16:10.718 ] 00:16:10.718 }' 00:16:10.718 17:32:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.718 17:32:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.977 17:32:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:10.977 17:32:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.977 17:32:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.977 [2024-12-07 17:32:44.295185] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:10.977 17:32:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.977 17:32:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:10.977 17:32:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:10.977 17:32:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:10.977 17:32:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:10.977 17:32:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:10.977 17:32:44 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:10.977 17:32:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.977 17:32:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.977 17:32:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.977 17:32:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.977 17:32:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.977 17:32:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:10.977 17:32:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.977 17:32:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.977 17:32:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.977 17:32:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.977 "name": "Existed_Raid", 00:16:10.977 "uuid": "9c03019d-ee9d-4632-9726-be66ae68a599", 00:16:10.977 "strip_size_kb": 64, 00:16:10.977 "state": "configuring", 00:16:10.977 "raid_level": "raid5f", 00:16:10.977 "superblock": true, 00:16:10.977 "num_base_bdevs": 4, 00:16:10.977 "num_base_bdevs_discovered": 2, 00:16:10.977 "num_base_bdevs_operational": 4, 00:16:10.977 "base_bdevs_list": [ 00:16:10.977 { 00:16:10.977 "name": "BaseBdev1", 00:16:10.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.977 "is_configured": false, 00:16:10.977 "data_offset": 0, 00:16:10.977 "data_size": 0 00:16:10.977 }, 00:16:10.977 { 00:16:10.977 "name": null, 00:16:10.977 "uuid": "2623661d-49ad-45af-b123-76706f2a9978", 00:16:10.977 
"is_configured": false, 00:16:10.977 "data_offset": 0, 00:16:10.977 "data_size": 63488 00:16:10.977 }, 00:16:10.977 { 00:16:10.977 "name": "BaseBdev3", 00:16:10.977 "uuid": "f98570d9-1944-4421-9fc2-a58f1b092e63", 00:16:10.977 "is_configured": true, 00:16:10.977 "data_offset": 2048, 00:16:10.977 "data_size": 63488 00:16:10.977 }, 00:16:10.977 { 00:16:10.977 "name": "BaseBdev4", 00:16:10.977 "uuid": "30d46417-654e-45a6-a5f2-fc40b07cfe29", 00:16:10.977 "is_configured": true, 00:16:10.977 "data_offset": 2048, 00:16:10.977 "data_size": 63488 00:16:10.977 } 00:16:10.977 ] 00:16:10.977 }' 00:16:10.977 17:32:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.977 17:32:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.545 17:32:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.545 17:32:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.545 17:32:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.545 17:32:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:11.545 17:32:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.545 17:32:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:11.545 17:32:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:11.545 17:32:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.545 17:32:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.545 [2024-12-07 17:32:44.821923] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:16:11.545 BaseBdev1 00:16:11.546 17:32:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.546 17:32:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:11.546 17:32:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:11.546 17:32:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:11.546 17:32:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:11.546 17:32:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:11.546 17:32:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:11.546 17:32:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:11.546 17:32:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.546 17:32:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.546 17:32:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.546 17:32:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:11.546 17:32:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.546 17:32:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.546 [ 00:16:11.546 { 00:16:11.546 "name": "BaseBdev1", 00:16:11.546 "aliases": [ 00:16:11.546 "8b5e3997-43fb-4000-850b-4d16f718a0d0" 00:16:11.546 ], 00:16:11.546 "product_name": "Malloc disk", 00:16:11.546 "block_size": 512, 00:16:11.546 "num_blocks": 65536, 00:16:11.546 "uuid": "8b5e3997-43fb-4000-850b-4d16f718a0d0", 
00:16:11.546 "assigned_rate_limits": { 00:16:11.546 "rw_ios_per_sec": 0, 00:16:11.546 "rw_mbytes_per_sec": 0, 00:16:11.546 "r_mbytes_per_sec": 0, 00:16:11.546 "w_mbytes_per_sec": 0 00:16:11.546 }, 00:16:11.546 "claimed": true, 00:16:11.546 "claim_type": "exclusive_write", 00:16:11.546 "zoned": false, 00:16:11.546 "supported_io_types": { 00:16:11.546 "read": true, 00:16:11.546 "write": true, 00:16:11.546 "unmap": true, 00:16:11.546 "flush": true, 00:16:11.546 "reset": true, 00:16:11.546 "nvme_admin": false, 00:16:11.546 "nvme_io": false, 00:16:11.546 "nvme_io_md": false, 00:16:11.546 "write_zeroes": true, 00:16:11.546 "zcopy": true, 00:16:11.546 "get_zone_info": false, 00:16:11.546 "zone_management": false, 00:16:11.546 "zone_append": false, 00:16:11.546 "compare": false, 00:16:11.546 "compare_and_write": false, 00:16:11.546 "abort": true, 00:16:11.546 "seek_hole": false, 00:16:11.546 "seek_data": false, 00:16:11.546 "copy": true, 00:16:11.546 "nvme_iov_md": false 00:16:11.546 }, 00:16:11.546 "memory_domains": [ 00:16:11.546 { 00:16:11.546 "dma_device_id": "system", 00:16:11.546 "dma_device_type": 1 00:16:11.546 }, 00:16:11.546 { 00:16:11.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.546 "dma_device_type": 2 00:16:11.546 } 00:16:11.546 ], 00:16:11.546 "driver_specific": {} 00:16:11.546 } 00:16:11.546 ] 00:16:11.546 17:32:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.546 17:32:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:11.546 17:32:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:11.546 17:32:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:11.546 17:32:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:11.546 17:32:44 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:11.546 17:32:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:11.546 17:32:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:11.546 17:32:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.546 17:32:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.546 17:32:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.546 17:32:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.546 17:32:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:11.546 17:32:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.546 17:32:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.546 17:32:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.546 17:32:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.546 17:32:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.546 "name": "Existed_Raid", 00:16:11.546 "uuid": "9c03019d-ee9d-4632-9726-be66ae68a599", 00:16:11.546 "strip_size_kb": 64, 00:16:11.546 "state": "configuring", 00:16:11.546 "raid_level": "raid5f", 00:16:11.546 "superblock": true, 00:16:11.546 "num_base_bdevs": 4, 00:16:11.546 "num_base_bdevs_discovered": 3, 00:16:11.546 "num_base_bdevs_operational": 4, 00:16:11.546 "base_bdevs_list": [ 00:16:11.546 { 00:16:11.546 "name": "BaseBdev1", 00:16:11.546 "uuid": "8b5e3997-43fb-4000-850b-4d16f718a0d0", 
00:16:11.546 "is_configured": true, 00:16:11.546 "data_offset": 2048, 00:16:11.546 "data_size": 63488 00:16:11.546 }, 00:16:11.546 { 00:16:11.546 "name": null, 00:16:11.546 "uuid": "2623661d-49ad-45af-b123-76706f2a9978", 00:16:11.546 "is_configured": false, 00:16:11.546 "data_offset": 0, 00:16:11.546 "data_size": 63488 00:16:11.546 }, 00:16:11.546 { 00:16:11.546 "name": "BaseBdev3", 00:16:11.546 "uuid": "f98570d9-1944-4421-9fc2-a58f1b092e63", 00:16:11.546 "is_configured": true, 00:16:11.546 "data_offset": 2048, 00:16:11.546 "data_size": 63488 00:16:11.546 }, 00:16:11.546 { 00:16:11.546 "name": "BaseBdev4", 00:16:11.546 "uuid": "30d46417-654e-45a6-a5f2-fc40b07cfe29", 00:16:11.546 "is_configured": true, 00:16:11.546 "data_offset": 2048, 00:16:11.546 "data_size": 63488 00:16:11.546 } 00:16:11.546 ] 00:16:11.546 }' 00:16:11.546 17:32:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.546 17:32:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.112 17:32:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:12.112 17:32:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.112 17:32:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.112 17:32:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.112 17:32:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.112 17:32:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:12.112 17:32:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:12.112 17:32:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:12.112 17:32:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.112 [2024-12-07 17:32:45.321129] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:12.112 17:32:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.112 17:32:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:12.112 17:32:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:12.112 17:32:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:12.112 17:32:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:12.112 17:32:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:12.112 17:32:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:12.112 17:32:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.112 17:32:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.112 17:32:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.112 17:32:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.112 17:32:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.112 17:32:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:12.112 17:32:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.112 17:32:45 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:12.112 17:32:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.112 17:32:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.112 "name": "Existed_Raid", 00:16:12.112 "uuid": "9c03019d-ee9d-4632-9726-be66ae68a599", 00:16:12.112 "strip_size_kb": 64, 00:16:12.112 "state": "configuring", 00:16:12.112 "raid_level": "raid5f", 00:16:12.112 "superblock": true, 00:16:12.112 "num_base_bdevs": 4, 00:16:12.112 "num_base_bdevs_discovered": 2, 00:16:12.112 "num_base_bdevs_operational": 4, 00:16:12.112 "base_bdevs_list": [ 00:16:12.112 { 00:16:12.112 "name": "BaseBdev1", 00:16:12.112 "uuid": "8b5e3997-43fb-4000-850b-4d16f718a0d0", 00:16:12.112 "is_configured": true, 00:16:12.112 "data_offset": 2048, 00:16:12.112 "data_size": 63488 00:16:12.112 }, 00:16:12.112 { 00:16:12.112 "name": null, 00:16:12.112 "uuid": "2623661d-49ad-45af-b123-76706f2a9978", 00:16:12.112 "is_configured": false, 00:16:12.112 "data_offset": 0, 00:16:12.112 "data_size": 63488 00:16:12.112 }, 00:16:12.112 { 00:16:12.112 "name": null, 00:16:12.112 "uuid": "f98570d9-1944-4421-9fc2-a58f1b092e63", 00:16:12.112 "is_configured": false, 00:16:12.112 "data_offset": 0, 00:16:12.112 "data_size": 63488 00:16:12.112 }, 00:16:12.112 { 00:16:12.112 "name": "BaseBdev4", 00:16:12.112 "uuid": "30d46417-654e-45a6-a5f2-fc40b07cfe29", 00:16:12.112 "is_configured": true, 00:16:12.112 "data_offset": 2048, 00:16:12.112 "data_size": 63488 00:16:12.112 } 00:16:12.112 ] 00:16:12.112 }' 00:16:12.112 17:32:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.112 17:32:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.680 17:32:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.680 17:32:45 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:12.680 17:32:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.680 17:32:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.680 17:32:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.680 17:32:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:12.680 17:32:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:12.680 17:32:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.680 17:32:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.680 [2024-12-07 17:32:45.812315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:12.680 17:32:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.680 17:32:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:12.680 17:32:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:12.680 17:32:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:12.680 17:32:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:12.680 17:32:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:12.680 17:32:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:12.680 17:32:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:16:12.680 17:32:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.680 17:32:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.680 17:32:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.680 17:32:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.680 17:32:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:12.680 17:32:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.680 17:32:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.680 17:32:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.680 17:32:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.680 "name": "Existed_Raid", 00:16:12.680 "uuid": "9c03019d-ee9d-4632-9726-be66ae68a599", 00:16:12.680 "strip_size_kb": 64, 00:16:12.680 "state": "configuring", 00:16:12.680 "raid_level": "raid5f", 00:16:12.680 "superblock": true, 00:16:12.680 "num_base_bdevs": 4, 00:16:12.680 "num_base_bdevs_discovered": 3, 00:16:12.680 "num_base_bdevs_operational": 4, 00:16:12.680 "base_bdevs_list": [ 00:16:12.680 { 00:16:12.680 "name": "BaseBdev1", 00:16:12.680 "uuid": "8b5e3997-43fb-4000-850b-4d16f718a0d0", 00:16:12.680 "is_configured": true, 00:16:12.680 "data_offset": 2048, 00:16:12.680 "data_size": 63488 00:16:12.680 }, 00:16:12.680 { 00:16:12.680 "name": null, 00:16:12.680 "uuid": "2623661d-49ad-45af-b123-76706f2a9978", 00:16:12.680 "is_configured": false, 00:16:12.680 "data_offset": 0, 00:16:12.680 "data_size": 63488 00:16:12.680 }, 00:16:12.680 { 00:16:12.680 "name": "BaseBdev3", 00:16:12.680 "uuid": "f98570d9-1944-4421-9fc2-a58f1b092e63", 
00:16:12.680 "is_configured": true, 00:16:12.680 "data_offset": 2048, 00:16:12.680 "data_size": 63488 00:16:12.680 }, 00:16:12.680 { 00:16:12.680 "name": "BaseBdev4", 00:16:12.680 "uuid": "30d46417-654e-45a6-a5f2-fc40b07cfe29", 00:16:12.680 "is_configured": true, 00:16:12.680 "data_offset": 2048, 00:16:12.680 "data_size": 63488 00:16:12.680 } 00:16:12.680 ] 00:16:12.680 }' 00:16:12.680 17:32:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.680 17:32:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.940 17:32:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:12.940 17:32:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.940 17:32:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.940 17:32:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.940 17:32:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.940 17:32:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:12.940 17:32:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:12.940 17:32:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.940 17:32:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.940 [2024-12-07 17:32:46.307556] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:13.200 17:32:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.200 17:32:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:16:13.200 17:32:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:13.200 17:32:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:13.200 17:32:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:13.200 17:32:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:13.200 17:32:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:13.200 17:32:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:13.200 17:32:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:13.200 17:32:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:13.200 17:32:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:13.200 17:32:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.200 17:32:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:13.200 17:32:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.200 17:32:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.200 17:32:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.200 17:32:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:13.200 "name": "Existed_Raid", 00:16:13.200 "uuid": "9c03019d-ee9d-4632-9726-be66ae68a599", 00:16:13.200 "strip_size_kb": 64, 00:16:13.200 "state": "configuring", 00:16:13.200 "raid_level": "raid5f", 
00:16:13.200 "superblock": true, 00:16:13.200 "num_base_bdevs": 4, 00:16:13.200 "num_base_bdevs_discovered": 2, 00:16:13.200 "num_base_bdevs_operational": 4, 00:16:13.201 "base_bdevs_list": [ 00:16:13.201 { 00:16:13.201 "name": null, 00:16:13.201 "uuid": "8b5e3997-43fb-4000-850b-4d16f718a0d0", 00:16:13.201 "is_configured": false, 00:16:13.201 "data_offset": 0, 00:16:13.201 "data_size": 63488 00:16:13.201 }, 00:16:13.201 { 00:16:13.201 "name": null, 00:16:13.201 "uuid": "2623661d-49ad-45af-b123-76706f2a9978", 00:16:13.201 "is_configured": false, 00:16:13.201 "data_offset": 0, 00:16:13.201 "data_size": 63488 00:16:13.201 }, 00:16:13.201 { 00:16:13.201 "name": "BaseBdev3", 00:16:13.201 "uuid": "f98570d9-1944-4421-9fc2-a58f1b092e63", 00:16:13.201 "is_configured": true, 00:16:13.201 "data_offset": 2048, 00:16:13.201 "data_size": 63488 00:16:13.201 }, 00:16:13.201 { 00:16:13.201 "name": "BaseBdev4", 00:16:13.201 "uuid": "30d46417-654e-45a6-a5f2-fc40b07cfe29", 00:16:13.201 "is_configured": true, 00:16:13.201 "data_offset": 2048, 00:16:13.201 "data_size": 63488 00:16:13.201 } 00:16:13.201 ] 00:16:13.201 }' 00:16:13.201 17:32:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:13.201 17:32:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.460 17:32:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.460 17:32:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.460 17:32:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:13.460 17:32:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.766 17:32:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.766 17:32:46 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:13.766 17:32:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:13.766 17:32:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.766 17:32:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.766 [2024-12-07 17:32:46.871332] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:13.766 17:32:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.766 17:32:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:13.766 17:32:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:13.766 17:32:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:13.766 17:32:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:13.766 17:32:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:13.766 17:32:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:13.766 17:32:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:13.766 17:32:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:13.766 17:32:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:13.766 17:32:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:13.766 17:32:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:16:13.766 17:32:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:13.766 17:32:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.766 17:32:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.766 17:32:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.766 17:32:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:13.766 "name": "Existed_Raid", 00:16:13.766 "uuid": "9c03019d-ee9d-4632-9726-be66ae68a599", 00:16:13.766 "strip_size_kb": 64, 00:16:13.766 "state": "configuring", 00:16:13.766 "raid_level": "raid5f", 00:16:13.766 "superblock": true, 00:16:13.766 "num_base_bdevs": 4, 00:16:13.767 "num_base_bdevs_discovered": 3, 00:16:13.767 "num_base_bdevs_operational": 4, 00:16:13.767 "base_bdevs_list": [ 00:16:13.767 { 00:16:13.767 "name": null, 00:16:13.767 "uuid": "8b5e3997-43fb-4000-850b-4d16f718a0d0", 00:16:13.767 "is_configured": false, 00:16:13.767 "data_offset": 0, 00:16:13.767 "data_size": 63488 00:16:13.767 }, 00:16:13.767 { 00:16:13.767 "name": "BaseBdev2", 00:16:13.767 "uuid": "2623661d-49ad-45af-b123-76706f2a9978", 00:16:13.767 "is_configured": true, 00:16:13.767 "data_offset": 2048, 00:16:13.767 "data_size": 63488 00:16:13.767 }, 00:16:13.767 { 00:16:13.767 "name": "BaseBdev3", 00:16:13.767 "uuid": "f98570d9-1944-4421-9fc2-a58f1b092e63", 00:16:13.767 "is_configured": true, 00:16:13.767 "data_offset": 2048, 00:16:13.767 "data_size": 63488 00:16:13.767 }, 00:16:13.767 { 00:16:13.767 "name": "BaseBdev4", 00:16:13.767 "uuid": "30d46417-654e-45a6-a5f2-fc40b07cfe29", 00:16:13.767 "is_configured": true, 00:16:13.767 "data_offset": 2048, 00:16:13.767 "data_size": 63488 00:16:13.767 } 00:16:13.767 ] 00:16:13.767 }' 00:16:13.767 17:32:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 
-- # xtrace_disable 00:16:13.767 17:32:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.026 17:32:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.026 17:32:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.026 17:32:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.026 17:32:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:14.026 17:32:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.026 17:32:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:14.026 17:32:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.026 17:32:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:14.026 17:32:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.026 17:32:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.026 17:32:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.026 17:32:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 8b5e3997-43fb-4000-850b-4d16f718a0d0 00:16:14.026 17:32:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.026 17:32:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.285 [2024-12-07 17:32:47.434138] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:14.285 [2024-12-07 17:32:47.434388] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:14.285 [2024-12-07 17:32:47.434401] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:14.285 [2024-12-07 17:32:47.434658] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:14.285 NewBaseBdev 00:16:14.285 17:32:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.285 17:32:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:14.285 17:32:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:14.285 17:32:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:14.285 17:32:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:14.285 17:32:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:14.285 17:32:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:14.285 17:32:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:14.285 17:32:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.285 17:32:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.285 [2024-12-07 17:32:47.442034] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:14.285 [2024-12-07 17:32:47.442061] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:14.285 [2024-12-07 17:32:47.442202] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:14.285 17:32:47 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.285 17:32:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:14.286 17:32:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.286 17:32:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.286 [ 00:16:14.286 { 00:16:14.286 "name": "NewBaseBdev", 00:16:14.286 "aliases": [ 00:16:14.286 "8b5e3997-43fb-4000-850b-4d16f718a0d0" 00:16:14.286 ], 00:16:14.286 "product_name": "Malloc disk", 00:16:14.286 "block_size": 512, 00:16:14.286 "num_blocks": 65536, 00:16:14.286 "uuid": "8b5e3997-43fb-4000-850b-4d16f718a0d0", 00:16:14.286 "assigned_rate_limits": { 00:16:14.286 "rw_ios_per_sec": 0, 00:16:14.286 "rw_mbytes_per_sec": 0, 00:16:14.286 "r_mbytes_per_sec": 0, 00:16:14.286 "w_mbytes_per_sec": 0 00:16:14.286 }, 00:16:14.286 "claimed": true, 00:16:14.286 "claim_type": "exclusive_write", 00:16:14.286 "zoned": false, 00:16:14.286 "supported_io_types": { 00:16:14.286 "read": true, 00:16:14.286 "write": true, 00:16:14.286 "unmap": true, 00:16:14.286 "flush": true, 00:16:14.286 "reset": true, 00:16:14.286 "nvme_admin": false, 00:16:14.286 "nvme_io": false, 00:16:14.286 "nvme_io_md": false, 00:16:14.286 "write_zeroes": true, 00:16:14.286 "zcopy": true, 00:16:14.286 "get_zone_info": false, 00:16:14.286 "zone_management": false, 00:16:14.286 "zone_append": false, 00:16:14.286 "compare": false, 00:16:14.286 "compare_and_write": false, 00:16:14.286 "abort": true, 00:16:14.286 "seek_hole": false, 00:16:14.286 "seek_data": false, 00:16:14.286 "copy": true, 00:16:14.286 "nvme_iov_md": false 00:16:14.286 }, 00:16:14.286 "memory_domains": [ 00:16:14.286 { 00:16:14.286 "dma_device_id": "system", 00:16:14.286 "dma_device_type": 1 00:16:14.286 }, 00:16:14.286 { 00:16:14.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:14.286 "dma_device_type": 2 00:16:14.286 } 
00:16:14.286 ], 00:16:14.286 "driver_specific": {} 00:16:14.286 } 00:16:14.286 ] 00:16:14.286 17:32:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.286 17:32:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:14.286 17:32:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:14.286 17:32:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:14.286 17:32:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:14.286 17:32:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:14.286 17:32:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:14.286 17:32:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:14.286 17:32:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:14.286 17:32:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.286 17:32:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:14.286 17:32:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.286 17:32:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:14.286 17:32:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.286 17:32:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.286 17:32:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.286 
17:32:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.286 17:32:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.286 "name": "Existed_Raid", 00:16:14.286 "uuid": "9c03019d-ee9d-4632-9726-be66ae68a599", 00:16:14.286 "strip_size_kb": 64, 00:16:14.286 "state": "online", 00:16:14.286 "raid_level": "raid5f", 00:16:14.286 "superblock": true, 00:16:14.286 "num_base_bdevs": 4, 00:16:14.286 "num_base_bdevs_discovered": 4, 00:16:14.286 "num_base_bdevs_operational": 4, 00:16:14.286 "base_bdevs_list": [ 00:16:14.286 { 00:16:14.286 "name": "NewBaseBdev", 00:16:14.286 "uuid": "8b5e3997-43fb-4000-850b-4d16f718a0d0", 00:16:14.286 "is_configured": true, 00:16:14.286 "data_offset": 2048, 00:16:14.286 "data_size": 63488 00:16:14.286 }, 00:16:14.286 { 00:16:14.286 "name": "BaseBdev2", 00:16:14.286 "uuid": "2623661d-49ad-45af-b123-76706f2a9978", 00:16:14.286 "is_configured": true, 00:16:14.286 "data_offset": 2048, 00:16:14.286 "data_size": 63488 00:16:14.286 }, 00:16:14.286 { 00:16:14.286 "name": "BaseBdev3", 00:16:14.286 "uuid": "f98570d9-1944-4421-9fc2-a58f1b092e63", 00:16:14.286 "is_configured": true, 00:16:14.286 "data_offset": 2048, 00:16:14.286 "data_size": 63488 00:16:14.286 }, 00:16:14.286 { 00:16:14.286 "name": "BaseBdev4", 00:16:14.286 "uuid": "30d46417-654e-45a6-a5f2-fc40b07cfe29", 00:16:14.286 "is_configured": true, 00:16:14.286 "data_offset": 2048, 00:16:14.286 "data_size": 63488 00:16:14.286 } 00:16:14.286 ] 00:16:14.286 }' 00:16:14.286 17:32:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.286 17:32:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.546 17:32:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:14.546 17:32:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=Existed_Raid 00:16:14.546 17:32:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:14.546 17:32:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:14.546 17:32:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:14.546 17:32:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:14.546 17:32:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:14.546 17:32:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.546 17:32:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.546 17:32:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:14.546 [2024-12-07 17:32:47.849811] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:14.546 17:32:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.546 17:32:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:14.546 "name": "Existed_Raid", 00:16:14.546 "aliases": [ 00:16:14.546 "9c03019d-ee9d-4632-9726-be66ae68a599" 00:16:14.546 ], 00:16:14.546 "product_name": "Raid Volume", 00:16:14.546 "block_size": 512, 00:16:14.546 "num_blocks": 190464, 00:16:14.546 "uuid": "9c03019d-ee9d-4632-9726-be66ae68a599", 00:16:14.546 "assigned_rate_limits": { 00:16:14.546 "rw_ios_per_sec": 0, 00:16:14.546 "rw_mbytes_per_sec": 0, 00:16:14.546 "r_mbytes_per_sec": 0, 00:16:14.546 "w_mbytes_per_sec": 0 00:16:14.546 }, 00:16:14.546 "claimed": false, 00:16:14.546 "zoned": false, 00:16:14.546 "supported_io_types": { 00:16:14.546 "read": true, 00:16:14.546 "write": true, 00:16:14.546 "unmap": false, 00:16:14.546 "flush": false, 
00:16:14.546 "reset": true, 00:16:14.546 "nvme_admin": false, 00:16:14.546 "nvme_io": false, 00:16:14.546 "nvme_io_md": false, 00:16:14.546 "write_zeroes": true, 00:16:14.546 "zcopy": false, 00:16:14.546 "get_zone_info": false, 00:16:14.546 "zone_management": false, 00:16:14.546 "zone_append": false, 00:16:14.546 "compare": false, 00:16:14.546 "compare_and_write": false, 00:16:14.546 "abort": false, 00:16:14.546 "seek_hole": false, 00:16:14.546 "seek_data": false, 00:16:14.546 "copy": false, 00:16:14.546 "nvme_iov_md": false 00:16:14.546 }, 00:16:14.546 "driver_specific": { 00:16:14.546 "raid": { 00:16:14.546 "uuid": "9c03019d-ee9d-4632-9726-be66ae68a599", 00:16:14.546 "strip_size_kb": 64, 00:16:14.546 "state": "online", 00:16:14.546 "raid_level": "raid5f", 00:16:14.546 "superblock": true, 00:16:14.546 "num_base_bdevs": 4, 00:16:14.546 "num_base_bdevs_discovered": 4, 00:16:14.546 "num_base_bdevs_operational": 4, 00:16:14.546 "base_bdevs_list": [ 00:16:14.546 { 00:16:14.546 "name": "NewBaseBdev", 00:16:14.546 "uuid": "8b5e3997-43fb-4000-850b-4d16f718a0d0", 00:16:14.546 "is_configured": true, 00:16:14.546 "data_offset": 2048, 00:16:14.546 "data_size": 63488 00:16:14.546 }, 00:16:14.546 { 00:16:14.546 "name": "BaseBdev2", 00:16:14.546 "uuid": "2623661d-49ad-45af-b123-76706f2a9978", 00:16:14.546 "is_configured": true, 00:16:14.546 "data_offset": 2048, 00:16:14.546 "data_size": 63488 00:16:14.546 }, 00:16:14.546 { 00:16:14.546 "name": "BaseBdev3", 00:16:14.546 "uuid": "f98570d9-1944-4421-9fc2-a58f1b092e63", 00:16:14.546 "is_configured": true, 00:16:14.546 "data_offset": 2048, 00:16:14.546 "data_size": 63488 00:16:14.546 }, 00:16:14.546 { 00:16:14.546 "name": "BaseBdev4", 00:16:14.546 "uuid": "30d46417-654e-45a6-a5f2-fc40b07cfe29", 00:16:14.546 "is_configured": true, 00:16:14.546 "data_offset": 2048, 00:16:14.546 "data_size": 63488 00:16:14.546 } 00:16:14.546 ] 00:16:14.546 } 00:16:14.546 } 00:16:14.546 }' 00:16:14.546 17:32:47 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:14.806 17:32:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:14.806 BaseBdev2 00:16:14.806 BaseBdev3 00:16:14.806 BaseBdev4' 00:16:14.806 17:32:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:14.806 17:32:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:14.806 17:32:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:14.806 17:32:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:14.806 17:32:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:14.806 17:32:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.806 17:32:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.806 17:32:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.806 17:32:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:14.806 17:32:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:14.806 17:32:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:14.806 17:32:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:14.806 17:32:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:14.806 
17:32:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.806 17:32:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.806 17:32:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.806 17:32:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:14.806 17:32:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:14.806 17:32:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:14.806 17:32:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:14.806 17:32:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.806 17:32:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.806 17:32:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:14.806 17:32:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.806 17:32:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:14.806 17:32:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:14.806 17:32:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:14.806 17:32:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:14.806 17:32:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.806 17:32:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:16:14.806 17:32:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:14.806 17:32:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.806 17:32:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:14.806 17:32:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:14.806 17:32:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:14.806 17:32:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.807 17:32:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.807 [2024-12-07 17:32:48.149050] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:14.807 [2024-12-07 17:32:48.149083] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:14.807 [2024-12-07 17:32:48.149152] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:14.807 [2024-12-07 17:32:48.149440] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:14.807 [2024-12-07 17:32:48.149455] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:14.807 17:32:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.807 17:32:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83426 00:16:14.807 17:32:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 83426 ']' 00:16:14.807 17:32:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 83426 
00:16:14.807 17:32:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:14.807 17:32:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:14.807 17:32:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83426 00:16:14.807 killing process with pid 83426 00:16:14.807 17:32:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:14.807 17:32:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:14.807 17:32:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83426' 00:16:14.807 17:32:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 83426 00:16:14.807 [2024-12-07 17:32:48.185321] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:14.807 17:32:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 83426 00:16:15.377 [2024-12-07 17:32:48.554553] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:16.318 ************************************ 00:16:16.318 END TEST raid5f_state_function_test_sb 00:16:16.318 ************************************ 00:16:16.318 17:32:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:16:16.318 00:16:16.318 real 0m11.466s 00:16:16.318 user 0m18.229s 00:16:16.318 sys 0m2.099s 00:16:16.318 17:32:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:16.318 17:32:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.318 17:32:49 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:16:16.318 17:32:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 
']' 00:16:16.318 17:32:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:16.318 17:32:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:16.579 ************************************ 00:16:16.579 START TEST raid5f_superblock_test 00:16:16.579 ************************************ 00:16:16.579 17:32:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:16:16.579 17:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:16:16.579 17:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:16:16.579 17:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:16.579 17:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:16.579 17:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:16.579 17:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:16.579 17:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:16.579 17:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:16.579 17:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:16.579 17:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:16.579 17:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:16.579 17:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:16.579 17:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:16.579 17:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:16:16.579 17:32:49 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@405 -- # strip_size=64 00:16:16.579 17:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:16:16.579 17:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84096 00:16:16.579 17:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:16.579 17:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84096 00:16:16.579 17:32:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 84096 ']' 00:16:16.579 17:32:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:16.579 17:32:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:16.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:16.579 17:32:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:16.579 17:32:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:16.579 17:32:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.579 [2024-12-07 17:32:49.792786] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:16:16.579 [2024-12-07 17:32:49.792895] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84096 ]
00:16:16.839 [2024-12-07 17:32:49.965841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:16.839 [2024-12-07 17:32:50.073259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:17.100 [2024-12-07 17:32:50.261241] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:16:17.100 [2024-12-07 17:32:50.261295] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:16:17.365  17:32:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:16:17.365  17:32:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:16:17.365  17:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:16:17.365  17:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:16:17.365  17:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:16:17.365  17:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:16:17.365  17:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:16:17.365  17:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:16:17.365  17:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:16:17.365  17:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:16:17.365  17:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:16:17.365  17:32:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:17.365  17:32:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:17.365 malloc1
00:16:17.365  17:32:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:17.365  17:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:16:17.365  17:32:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:17.365  17:32:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:17.365 [2024-12-07 17:32:50.662101] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:16:17.365 [2024-12-07 17:32:50.662175] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:17.365 [2024-12-07 17:32:50.662194] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:16:17.365 [2024-12-07 17:32:50.662204] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:17.365 [2024-12-07 17:32:50.664291] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:17.365 [2024-12-07 17:32:50.664331] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:16:17.365 pt1
00:16:17.365  17:32:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:17.365  17:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:16:17.365  17:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:16:17.365  17:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:16:17.365  17:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:16:17.365  17:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:16:17.365  17:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:16:17.365  17:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:16:17.365  17:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:16:17.365  17:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:16:17.365  17:32:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:17.365  17:32:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:17.365 malloc2
00:16:17.365  17:32:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:17.365  17:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:16:17.365  17:32:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:17.365  17:32:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:17.365 [2024-12-07 17:32:50.716276] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:16:17.365 [2024-12-07 17:32:50.716329] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:17.365 [2024-12-07 17:32:50.716368] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:16:17.365 [2024-12-07 17:32:50.716378] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:17.365 [2024-12-07 17:32:50.718367] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:17.365 [2024-12-07 17:32:50.718403] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:16:17.365 pt2
00:16:17.365  17:32:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:17.365  17:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:16:17.365  17:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:16:17.365  17:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3
00:16:17.365  17:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3
00:16:17.365  17:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:16:17.365  17:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:16:17.365  17:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:16:17.365  17:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:16:17.365  17:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3
00:16:17.365  17:32:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:17.365  17:32:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:17.625 malloc3
00:16:17.625  17:32:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:17.625  17:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:16:17.625  17:32:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:17.625  17:32:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:17.625 [2024-12-07 17:32:50.809571] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:16:17.625 [2024-12-07 17:32:50.809623] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:17.625 [2024-12-07 17:32:50.809642] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:16:17.625 [2024-12-07 17:32:50.809651] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:17.625 [2024-12-07 17:32:50.811684] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:17.625 [2024-12-07 17:32:50.811722] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:16:17.625 pt3
00:16:17.625  17:32:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:17.625  17:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:16:17.625  17:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:16:17.625  17:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4
00:16:17.625  17:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4
00:16:17.625  17:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004
00:16:17.625  17:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:16:17.625  17:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:16:17.625  17:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:16:17.625  17:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4
00:16:17.625  17:32:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:17.625  17:32:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:17.625 malloc4
00:16:17.625  17:32:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:17.625  17:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:16:17.625  17:32:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:17.625  17:32:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:17.625 [2024-12-07 17:32:50.861680] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:16:17.625 [2024-12-07 17:32:50.861736] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:17.625 [2024-12-07 17:32:50.861754] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:16:17.625 [2024-12-07 17:32:50.861763] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:17.625 [2024-12-07 17:32:50.863814] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:17.625 [2024-12-07 17:32:50.863852] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:16:17.625 pt4
00:16:17.625  17:32:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:17.625  17:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:16:17.625  17:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:16:17.625  17:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s
00:16:17.625  17:32:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:17.625  17:32:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:17.625 [2024-12-07 17:32:50.873695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:16:17.625 [2024-12-07 17:32:50.875442] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:16:17.625 [2024-12-07 17:32:50.875542] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:16:17.625 [2024-12-07 17:32:50.875591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:16:17.625 [2024-12-07 17:32:50.875774] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:16:17.625 [2024-12-07 17:32:50.875793] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:16:17.625 [2024-12-07 17:32:50.876046] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:16:17.625 [2024-12-07 17:32:50.882630] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:16:17.625 [2024-12-07 17:32:50.882654] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:16:17.625 [2024-12-07 17:32:50.882835] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:17.626  17:32:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:17.626  17:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4
00:16:17.626  17:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:17.626  17:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:17.626  17:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:17.626  17:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:17.626  17:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:17.626  17:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:17.626  17:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:17.626  17:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:17.626  17:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:17.626  17:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:17.626  17:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:17.626  17:32:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:17.626  17:32:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:17.626  17:32:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:17.626  17:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:17.626   "name": "raid_bdev1",
00:16:17.626   "uuid": "60ad1e69-bda6-47c7-95fa-b1ac1d9606a2",
00:16:17.626   "strip_size_kb": 64,
00:16:17.626   "state": "online",
00:16:17.626   "raid_level": "raid5f",
00:16:17.626   "superblock": true,
00:16:17.626   "num_base_bdevs": 4,
00:16:17.626   "num_base_bdevs_discovered": 4,
00:16:17.626   "num_base_bdevs_operational": 4,
00:16:17.626   "base_bdevs_list": [
00:16:17.626     {
00:16:17.626       "name": "pt1",
00:16:17.626       "uuid": "00000000-0000-0000-0000-000000000001",
00:16:17.626       "is_configured": true,
00:16:17.626       "data_offset": 2048,
00:16:17.626       "data_size": 63488
00:16:17.626     },
00:16:17.626     {
00:16:17.626       "name": "pt2",
00:16:17.626       "uuid": "00000000-0000-0000-0000-000000000002",
00:16:17.626       "is_configured": true,
00:16:17.626       "data_offset": 2048,
00:16:17.626       "data_size": 63488
00:16:17.626     },
00:16:17.626     {
00:16:17.626       "name": "pt3",
00:16:17.626       "uuid": "00000000-0000-0000-0000-000000000003",
00:16:17.626       "is_configured": true,
00:16:17.626       "data_offset": 2048,
00:16:17.626       "data_size": 63488
00:16:17.626     },
00:16:17.626     {
00:16:17.626       "name": "pt4",
00:16:17.626       "uuid": "00000000-0000-0000-0000-000000000004",
00:16:17.626       "is_configured": true,
00:16:17.626       "data_offset": 2048,
00:16:17.626       "data_size": 63488
00:16:17.626     }
00:16:17.626   ]
00:16:17.626 }'
00:16:17.626  17:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:17.626  17:32:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:18.194  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:16:18.194  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:16:18.194  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:16:18.194  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:16:18.194  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:16:18.194  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:16:18.194  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:16:18.194  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:16:18.194  17:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.194  17:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:18.194 [2024-12-07 17:32:51.302891] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:18.194  17:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.194  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:16:18.194   "name": "raid_bdev1",
00:16:18.194   "aliases": [
00:16:18.194     "60ad1e69-bda6-47c7-95fa-b1ac1d9606a2"
00:16:18.194   ],
00:16:18.194   "product_name": "Raid Volume",
00:16:18.194   "block_size": 512,
00:16:18.194   "num_blocks": 190464,
00:16:18.194   "uuid": "60ad1e69-bda6-47c7-95fa-b1ac1d9606a2",
00:16:18.194   "assigned_rate_limits": {
00:16:18.194     "rw_ios_per_sec": 0,
00:16:18.194     "rw_mbytes_per_sec": 0,
00:16:18.194     "r_mbytes_per_sec": 0,
00:16:18.194     "w_mbytes_per_sec": 0
00:16:18.194   },
00:16:18.194   "claimed": false,
00:16:18.194   "zoned": false,
00:16:18.194   "supported_io_types": {
00:16:18.194     "read": true,
00:16:18.194     "write": true,
00:16:18.194     "unmap": false,
00:16:18.194     "flush": false,
00:16:18.194     "reset": true,
00:16:18.194     "nvme_admin": false,
00:16:18.194     "nvme_io": false,
00:16:18.194     "nvme_io_md": false,
00:16:18.194     "write_zeroes": true,
00:16:18.194     "zcopy": false,
00:16:18.194     "get_zone_info": false,
00:16:18.194     "zone_management": false,
00:16:18.194     "zone_append": false,
00:16:18.194     "compare": false,
00:16:18.194     "compare_and_write": false,
00:16:18.194     "abort": false,
00:16:18.194     "seek_hole": false,
00:16:18.194     "seek_data": false,
00:16:18.194     "copy": false,
00:16:18.194     "nvme_iov_md": false
00:16:18.194   },
00:16:18.194   "driver_specific": {
00:16:18.194     "raid": {
00:16:18.194       "uuid": "60ad1e69-bda6-47c7-95fa-b1ac1d9606a2",
00:16:18.194       "strip_size_kb": 64,
00:16:18.194       "state": "online",
00:16:18.194       "raid_level": "raid5f",
00:16:18.194       "superblock": true,
00:16:18.194       "num_base_bdevs": 4,
00:16:18.194       "num_base_bdevs_discovered": 4,
00:16:18.194       "num_base_bdevs_operational": 4,
00:16:18.194       "base_bdevs_list": [
00:16:18.194         {
00:16:18.194           "name": "pt1",
00:16:18.194           "uuid": "00000000-0000-0000-0000-000000000001",
00:16:18.194           "is_configured": true,
00:16:18.194           "data_offset": 2048,
00:16:18.194           "data_size": 63488
00:16:18.194         },
00:16:18.194         {
00:16:18.194           "name": "pt2",
00:16:18.194           "uuid": "00000000-0000-0000-0000-000000000002",
00:16:18.194           "is_configured": true,
00:16:18.194           "data_offset": 2048,
00:16:18.194           "data_size": 63488
00:16:18.194         },
00:16:18.194         {
00:16:18.194           "name": "pt3",
00:16:18.194           "uuid": "00000000-0000-0000-0000-000000000003",
00:16:18.194           "is_configured": true,
00:16:18.194           "data_offset": 2048,
00:16:18.194           "data_size": 63488
00:16:18.194         },
00:16:18.194         {
00:16:18.194           "name": "pt4",
00:16:18.194           "uuid": "00000000-0000-0000-0000-000000000004",
00:16:18.194           "is_configured": true,
00:16:18.194           "data_offset": 2048,
00:16:18.194           "data_size": 63488
00:16:18.194         }
00:16:18.194       ]
00:16:18.194     }
00:16:18.194   }
00:16:18.194 }'
00:16:18.194  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:16:18.194  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:16:18.194 pt2
00:16:18.194 pt3
00:16:18.194 pt4'
00:16:18.194  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:18.194  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512   '
00:16:18.194  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:18.194  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:18.194  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:16:18.194  17:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.194  17:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:18.194  17:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.194  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:16:18.194  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:16:18.194  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:18.194  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:16:18.194  17:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.194  17:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:18.194  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:18.194  17:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.194  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:16:18.194  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:16:18.194  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:18.194  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:18.194  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:16:18.194  17:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.194  17:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:18.194  17:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.194  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:16:18.194  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:16:18.194  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:18.194  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:16:18.194  17:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.194  17:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:18.194  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:18.194  17:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.454  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:16:18.454  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:16:18.455  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:16:18.455  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:16:18.455  17:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.455  17:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:18.455 [2024-12-07 17:32:51.590313] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:18.455  17:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.455  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=60ad1e69-bda6-47c7-95fa-b1ac1d9606a2
00:16:18.455  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 60ad1e69-bda6-47c7-95fa-b1ac1d9606a2 ']'
00:16:18.455  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:16:18.455  17:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.455  17:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:18.455 [2024-12-07 17:32:51.618106] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:16:18.455 [2024-12-07 17:32:51.618131] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:16:18.455 [2024-12-07 17:32:51.618201] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:16:18.455 [2024-12-07 17:32:51.618284] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:16:18.455 [2024-12-07 17:32:51.618297] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:16:18.455  17:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.455  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:18.455  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:16:18.455  17:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.455  17:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:18.455  17:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.455  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:16:18.455  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:16:18.455  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:16:18.455  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:16:18.455  17:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.455  17:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:18.455  17:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.455  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:16:18.455  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:16:18.455  17:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.455  17:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:18.455  17:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.455  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:16:18.455  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:16:18.455  17:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.455  17:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:18.455  17:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.455  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:16:18.455  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4
00:16:18.455  17:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.455  17:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:18.455  17:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.455  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:16:18.455  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:16:18.455  17:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.455  17:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:18.455  17:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.455  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:16:18.455  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:16:18.455  17:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0
00:16:18.455  17:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:16:18.455  17:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:16:18.455  17:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:18.455  17:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:16:18.455  17:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:18.455  17:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:16:18.455  17:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.455  17:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:18.455 [2024-12-07 17:32:51.781851] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:16:18.455 [2024-12-07 17:32:51.783688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:16:18.455 [2024-12-07 17:32:51.783740] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:16:18.455 [2024-12-07 17:32:51.783773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:16:18.455 [2024-12-07 17:32:51.783819] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:16:18.455 [2024-12-07 17:32:51.783862] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:16:18.455 [2024-12-07 17:32:51.783885] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:16:18.455 [2024-12-07 17:32:51.783904] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4
00:16:18.455 [2024-12-07 17:32:51.783917] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:16:18.455 [2024-12-07 17:32:51.783926] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:16:18.455 request:
00:16:18.455 {
00:16:18.455   "name": "raid_bdev1",
00:16:18.455   "raid_level": "raid5f",
00:16:18.455   "base_bdevs": [
00:16:18.455     "malloc1",
00:16:18.455     "malloc2",
00:16:18.455     "malloc3",
00:16:18.455     "malloc4"
00:16:18.455   ],
00:16:18.455   "strip_size_kb": 64,
00:16:18.455   "superblock": false,
00:16:18.455   "method": "bdev_raid_create",
00:16:18.455   "req_id": 1
00:16:18.455 }
00:16:18.455 Got JSON-RPC error response
00:16:18.455 response:
00:16:18.455 {
00:16:18.455   "code": -17,
00:16:18.455   "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:16:18.455 }
00:16:18.455  17:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:16:18.455  17:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:16:18.455  17:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:18.455  17:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:18.455  17:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:18.455  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:18.455  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:16:18.455  17:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.455  17:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:18.455  17:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.715  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:16:18.715  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:16:18.715  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:16:18.715  17:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.715  17:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:18.715 [2024-12-07 17:32:51.845706] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:16:18.715 [2024-12-07 17:32:51.845756] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:18.715 [2024-12-07 17:32:51.845771] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:16:18.715 [2024-12-07 17:32:51.845781] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:18.715 [2024-12-07 17:32:51.847871] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:18.715 [2024-12-07 17:32:51.847913] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:16:18.715 [2024-12-07 17:32:51.847991] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:16:18.715 [2024-12-07 17:32:51.848043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:16:18.715 pt1
00:16:18.715  17:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.715  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4
00:16:18.715  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:18.715  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:18.715  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:18.715  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:18.715  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:18.715  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:18.715  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:18.715  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:18.715  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:18.715  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:18.715  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:18.716  17:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.716  17:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:18.716  17:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.716  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:18.716   "name": "raid_bdev1",
00:16:18.716   "uuid": "60ad1e69-bda6-47c7-95fa-b1ac1d9606a2",
00:16:18.716   "strip_size_kb": 64,
00:16:18.716   "state": "configuring",
00:16:18.716   "raid_level": "raid5f",
00:16:18.716   "superblock": true,
00:16:18.716   "num_base_bdevs": 4,
00:16:18.716   "num_base_bdevs_discovered": 1,
00:16:18.716   "num_base_bdevs_operational": 4,
00:16:18.716   "base_bdevs_list": [
00:16:18.716     {
00:16:18.716       "name": "pt1",
00:16:18.716       "uuid": "00000000-0000-0000-0000-000000000001",
00:16:18.716       "is_configured": true,
00:16:18.716       "data_offset": 2048,
00:16:18.716       "data_size": 63488
00:16:18.716     },
00:16:18.716     {
00:16:18.716       "name": null,
00:16:18.716       "uuid": "00000000-0000-0000-0000-000000000002",
00:16:18.716       "is_configured": false,
00:16:18.716       "data_offset": 2048,
00:16:18.716       "data_size": 63488
00:16:18.716     },
00:16:18.716     {
00:16:18.716       "name": null,
00:16:18.716       "uuid": "00000000-0000-0000-0000-000000000003",
00:16:18.716       "is_configured": false,
00:16:18.716       "data_offset": 2048,
00:16:18.716       "data_size": 63488
00:16:18.716     },
00:16:18.716     {
00:16:18.716       "name": null,
00:16:18.716       "uuid": "00000000-0000-0000-0000-000000000004",
00:16:18.716       "is_configured": false,
00:16:18.716       "data_offset": 2048,
00:16:18.716       "data_size": 63488
00:16:18.716     }
00:16:18.716   ]
00:16:18.716 }'
00:16:18.716  17:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:18.716  17:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:18.975  17:32:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']'
00:16:18.975  17:32:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:16:18.975  17:32:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.975  17:32:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:18.975 [2024-12-07 17:32:52.305034] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:16:18.975 [2024-12-07 17:32:52.305108] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:18.975 [2024-12-07 17:32:52.305129] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:16:18.975 [2024-12-07 17:32:52.305140] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:18.975 [2024-12-07 17:32:52.305601] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:18.975 [2024-12-07 17:32:52.305630] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:16:18.975 [2024-12-07 17:32:52.305711] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:16:18.975 [2024-12-07 17:32:52.305740] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:16:18.975 pt2
00:16:18.975  17:32:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.975  17:32:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:16:18.975  17:32:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.975 17:32:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.975 [2024-12-07 17:32:52.317010] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:18.975 17:32:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.975 17:32:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:16:18.975 17:32:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:18.975 17:32:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:18.975 17:32:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:18.975 17:32:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:18.975 17:32:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:18.975 17:32:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.975 17:32:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.975 17:32:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.975 17:32:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.975 17:32:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.975 17:32:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.975 17:32:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.975 17:32:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.975 17:32:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:16:19.235 17:32:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.235 "name": "raid_bdev1", 00:16:19.235 "uuid": "60ad1e69-bda6-47c7-95fa-b1ac1d9606a2", 00:16:19.235 "strip_size_kb": 64, 00:16:19.235 "state": "configuring", 00:16:19.235 "raid_level": "raid5f", 00:16:19.235 "superblock": true, 00:16:19.235 "num_base_bdevs": 4, 00:16:19.235 "num_base_bdevs_discovered": 1, 00:16:19.235 "num_base_bdevs_operational": 4, 00:16:19.235 "base_bdevs_list": [ 00:16:19.235 { 00:16:19.235 "name": "pt1", 00:16:19.235 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:19.235 "is_configured": true, 00:16:19.235 "data_offset": 2048, 00:16:19.235 "data_size": 63488 00:16:19.235 }, 00:16:19.235 { 00:16:19.235 "name": null, 00:16:19.235 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:19.235 "is_configured": false, 00:16:19.235 "data_offset": 0, 00:16:19.235 "data_size": 63488 00:16:19.235 }, 00:16:19.235 { 00:16:19.235 "name": null, 00:16:19.235 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:19.235 "is_configured": false, 00:16:19.235 "data_offset": 2048, 00:16:19.235 "data_size": 63488 00:16:19.235 }, 00:16:19.235 { 00:16:19.235 "name": null, 00:16:19.235 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:19.235 "is_configured": false, 00:16:19.235 "data_offset": 2048, 00:16:19.235 "data_size": 63488 00:16:19.235 } 00:16:19.235 ] 00:16:19.235 }' 00:16:19.235 17:32:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.235 17:32:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.495 17:32:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:19.495 17:32:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:19.495 17:32:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:16:19.495 17:32:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.495 17:32:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.496 [2024-12-07 17:32:52.812131] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:19.496 [2024-12-07 17:32:52.812197] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:19.496 [2024-12-07 17:32:52.812216] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:19.496 [2024-12-07 17:32:52.812226] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:19.496 [2024-12-07 17:32:52.812700] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:19.496 [2024-12-07 17:32:52.812732] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:19.496 [2024-12-07 17:32:52.812814] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:19.496 [2024-12-07 17:32:52.812841] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:19.496 pt2 00:16:19.496 17:32:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.496 17:32:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:19.496 17:32:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:19.496 17:32:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:19.496 17:32:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.496 17:32:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.496 [2024-12-07 17:32:52.824102] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:16:19.496 [2024-12-07 17:32:52.824154] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:19.496 [2024-12-07 17:32:52.824178] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:19.496 [2024-12-07 17:32:52.824190] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:19.496 [2024-12-07 17:32:52.824548] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:19.496 [2024-12-07 17:32:52.824578] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:19.496 [2024-12-07 17:32:52.824639] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:19.496 [2024-12-07 17:32:52.824667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:19.496 pt3 00:16:19.496 17:32:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.496 17:32:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:19.496 17:32:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:19.496 17:32:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:19.496 17:32:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.496 17:32:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.496 [2024-12-07 17:32:52.836058] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:19.496 [2024-12-07 17:32:52.836101] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:19.496 [2024-12-07 17:32:52.836115] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:19.496 [2024-12-07 17:32:52.836123] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:19.496 [2024-12-07 17:32:52.836464] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:19.496 [2024-12-07 17:32:52.836494] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:19.496 [2024-12-07 17:32:52.836551] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:19.496 [2024-12-07 17:32:52.836587] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:19.496 [2024-12-07 17:32:52.836736] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:19.496 [2024-12-07 17:32:52.836752] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:19.496 [2024-12-07 17:32:52.836985] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:19.496 [2024-12-07 17:32:52.843710] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:19.496 [2024-12-07 17:32:52.843735] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:19.496 [2024-12-07 17:32:52.843903] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:19.496 pt4 00:16:19.496 17:32:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.496 17:32:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:19.496 17:32:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:19.496 17:32:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:19.496 17:32:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:19.496 17:32:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:19.496 17:32:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:19.496 17:32:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:19.496 17:32:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:19.496 17:32:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.496 17:32:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.496 17:32:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.496 17:32:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.496 17:32:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.496 17:32:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.496 17:32:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.496 17:32:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.496 17:32:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.756 17:32:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.756 "name": "raid_bdev1", 00:16:19.756 "uuid": "60ad1e69-bda6-47c7-95fa-b1ac1d9606a2", 00:16:19.756 "strip_size_kb": 64, 00:16:19.756 "state": "online", 00:16:19.756 "raid_level": "raid5f", 00:16:19.756 "superblock": true, 00:16:19.756 "num_base_bdevs": 4, 00:16:19.756 "num_base_bdevs_discovered": 4, 00:16:19.756 "num_base_bdevs_operational": 4, 00:16:19.756 "base_bdevs_list": [ 00:16:19.756 { 00:16:19.756 "name": "pt1", 00:16:19.756 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:19.756 "is_configured": true, 00:16:19.756 
"data_offset": 2048, 00:16:19.756 "data_size": 63488 00:16:19.756 }, 00:16:19.756 { 00:16:19.756 "name": "pt2", 00:16:19.756 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:19.756 "is_configured": true, 00:16:19.756 "data_offset": 2048, 00:16:19.756 "data_size": 63488 00:16:19.756 }, 00:16:19.756 { 00:16:19.756 "name": "pt3", 00:16:19.756 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:19.756 "is_configured": true, 00:16:19.756 "data_offset": 2048, 00:16:19.756 "data_size": 63488 00:16:19.756 }, 00:16:19.756 { 00:16:19.756 "name": "pt4", 00:16:19.756 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:19.756 "is_configured": true, 00:16:19.756 "data_offset": 2048, 00:16:19.756 "data_size": 63488 00:16:19.756 } 00:16:19.756 ] 00:16:19.756 }' 00:16:19.756 17:32:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.756 17:32:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.015 17:32:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:20.015 17:32:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:20.015 17:32:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:20.015 17:32:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:20.015 17:32:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:20.015 17:32:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:20.015 17:32:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:20.015 17:32:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.015 17:32:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.015 17:32:53 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:20.015 [2024-12-07 17:32:53.287472] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:20.015 17:32:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.015 17:32:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:20.015 "name": "raid_bdev1", 00:16:20.015 "aliases": [ 00:16:20.015 "60ad1e69-bda6-47c7-95fa-b1ac1d9606a2" 00:16:20.015 ], 00:16:20.015 "product_name": "Raid Volume", 00:16:20.015 "block_size": 512, 00:16:20.015 "num_blocks": 190464, 00:16:20.015 "uuid": "60ad1e69-bda6-47c7-95fa-b1ac1d9606a2", 00:16:20.015 "assigned_rate_limits": { 00:16:20.015 "rw_ios_per_sec": 0, 00:16:20.015 "rw_mbytes_per_sec": 0, 00:16:20.015 "r_mbytes_per_sec": 0, 00:16:20.015 "w_mbytes_per_sec": 0 00:16:20.015 }, 00:16:20.015 "claimed": false, 00:16:20.015 "zoned": false, 00:16:20.015 "supported_io_types": { 00:16:20.015 "read": true, 00:16:20.015 "write": true, 00:16:20.015 "unmap": false, 00:16:20.015 "flush": false, 00:16:20.015 "reset": true, 00:16:20.015 "nvme_admin": false, 00:16:20.015 "nvme_io": false, 00:16:20.015 "nvme_io_md": false, 00:16:20.015 "write_zeroes": true, 00:16:20.015 "zcopy": false, 00:16:20.015 "get_zone_info": false, 00:16:20.015 "zone_management": false, 00:16:20.015 "zone_append": false, 00:16:20.015 "compare": false, 00:16:20.015 "compare_and_write": false, 00:16:20.015 "abort": false, 00:16:20.015 "seek_hole": false, 00:16:20.015 "seek_data": false, 00:16:20.015 "copy": false, 00:16:20.015 "nvme_iov_md": false 00:16:20.015 }, 00:16:20.015 "driver_specific": { 00:16:20.015 "raid": { 00:16:20.015 "uuid": "60ad1e69-bda6-47c7-95fa-b1ac1d9606a2", 00:16:20.015 "strip_size_kb": 64, 00:16:20.015 "state": "online", 00:16:20.015 "raid_level": "raid5f", 00:16:20.015 "superblock": true, 00:16:20.015 "num_base_bdevs": 4, 00:16:20.015 "num_base_bdevs_discovered": 4, 
00:16:20.015 "num_base_bdevs_operational": 4, 00:16:20.015 "base_bdevs_list": [ 00:16:20.015 { 00:16:20.015 "name": "pt1", 00:16:20.015 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:20.015 "is_configured": true, 00:16:20.015 "data_offset": 2048, 00:16:20.015 "data_size": 63488 00:16:20.015 }, 00:16:20.015 { 00:16:20.015 "name": "pt2", 00:16:20.015 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:20.015 "is_configured": true, 00:16:20.015 "data_offset": 2048, 00:16:20.015 "data_size": 63488 00:16:20.015 }, 00:16:20.015 { 00:16:20.016 "name": "pt3", 00:16:20.016 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:20.016 "is_configured": true, 00:16:20.016 "data_offset": 2048, 00:16:20.016 "data_size": 63488 00:16:20.016 }, 00:16:20.016 { 00:16:20.016 "name": "pt4", 00:16:20.016 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:20.016 "is_configured": true, 00:16:20.016 "data_offset": 2048, 00:16:20.016 "data_size": 63488 00:16:20.016 } 00:16:20.016 ] 00:16:20.016 } 00:16:20.016 } 00:16:20.016 }' 00:16:20.016 17:32:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:20.016 17:32:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:20.016 pt2 00:16:20.016 pt3 00:16:20.016 pt4' 00:16:20.016 17:32:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:20.275 17:32:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:20.275 17:32:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:20.275 17:32:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:20.275 17:32:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.275 17:32:53 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.275 17:32:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:20.275 17:32:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.275 17:32:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:20.275 17:32:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:20.275 17:32:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:20.275 17:32:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:20.275 17:32:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.275 17:32:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.275 17:32:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:20.275 17:32:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.275 17:32:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:20.275 17:32:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:20.275 17:32:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:20.275 17:32:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:20.275 17:32:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:20.275 17:32:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.275 
17:32:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.275 17:32:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.275 17:32:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:20.275 17:32:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:20.275 17:32:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:20.275 17:32:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:20.275 17:32:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:20.275 17:32:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.276 17:32:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.276 17:32:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.276 17:32:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:20.276 17:32:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:20.276 17:32:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:20.276 17:32:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.276 17:32:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.276 17:32:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:20.276 [2024-12-07 17:32:53.574906] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:20.276 17:32:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:20.276 17:32:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 60ad1e69-bda6-47c7-95fa-b1ac1d9606a2 '!=' 60ad1e69-bda6-47c7-95fa-b1ac1d9606a2 ']' 00:16:20.276 17:32:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:16:20.276 17:32:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:20.276 17:32:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:20.276 17:32:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:20.276 17:32:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.276 17:32:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.276 [2024-12-07 17:32:53.618711] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:20.276 17:32:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.276 17:32:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:20.276 17:32:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:20.276 17:32:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:20.276 17:32:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:20.276 17:32:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:20.276 17:32:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:20.276 17:32:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.276 17:32:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.276 17:32:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:16:20.276 17:32:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.276 17:32:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.276 17:32:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.276 17:32:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.276 17:32:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.276 17:32:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.536 17:32:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.536 "name": "raid_bdev1", 00:16:20.536 "uuid": "60ad1e69-bda6-47c7-95fa-b1ac1d9606a2", 00:16:20.536 "strip_size_kb": 64, 00:16:20.536 "state": "online", 00:16:20.536 "raid_level": "raid5f", 00:16:20.536 "superblock": true, 00:16:20.536 "num_base_bdevs": 4, 00:16:20.536 "num_base_bdevs_discovered": 3, 00:16:20.536 "num_base_bdevs_operational": 3, 00:16:20.536 "base_bdevs_list": [ 00:16:20.536 { 00:16:20.536 "name": null, 00:16:20.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.536 "is_configured": false, 00:16:20.536 "data_offset": 0, 00:16:20.536 "data_size": 63488 00:16:20.536 }, 00:16:20.536 { 00:16:20.536 "name": "pt2", 00:16:20.536 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:20.536 "is_configured": true, 00:16:20.536 "data_offset": 2048, 00:16:20.536 "data_size": 63488 00:16:20.536 }, 00:16:20.536 { 00:16:20.536 "name": "pt3", 00:16:20.536 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:20.536 "is_configured": true, 00:16:20.536 "data_offset": 2048, 00:16:20.536 "data_size": 63488 00:16:20.536 }, 00:16:20.536 { 00:16:20.536 "name": "pt4", 00:16:20.536 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:20.536 "is_configured": true, 00:16:20.536 
"data_offset": 2048, 00:16:20.536 "data_size": 63488 00:16:20.536 } 00:16:20.536 ] 00:16:20.536 }' 00:16:20.536 17:32:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.536 17:32:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.795 17:32:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:20.795 17:32:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.795 17:32:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.795 [2024-12-07 17:32:54.097896] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:20.795 [2024-12-07 17:32:54.097926] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:20.795 [2024-12-07 17:32:54.098023] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:20.795 [2024-12-07 17:32:54.098097] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:20.795 [2024-12-07 17:32:54.098106] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:20.795 17:32:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.795 17:32:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:20.795 17:32:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.795 17:32:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.795 17:32:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.795 17:32:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.795 17:32:54 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:20.795 17:32:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:20.795 17:32:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:20.795 17:32:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:20.795 17:32:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:20.795 17:32:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.795 17:32:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.795 17:32:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.795 17:32:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:20.795 17:32:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:20.795 17:32:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:16:20.795 17:32:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.795 17:32:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.795 17:32:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.795 17:32:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:20.795 17:32:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:20.795 17:32:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:16:20.796 17:32:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.796 17:32:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.056 17:32:54 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.056 17:32:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:21.056 17:32:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:21.056 17:32:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:21.056 17:32:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:21.056 17:32:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:21.056 17:32:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.056 17:32:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.056 [2024-12-07 17:32:54.185729] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:21.056 [2024-12-07 17:32:54.185780] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:21.056 [2024-12-07 17:32:54.185796] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:16:21.056 [2024-12-07 17:32:54.185805] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:21.056 [2024-12-07 17:32:54.187988] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:21.056 [2024-12-07 17:32:54.188024] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:21.056 [2024-12-07 17:32:54.188102] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:21.056 [2024-12-07 17:32:54.188144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:21.056 pt2 00:16:21.056 17:32:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.056 17:32:54 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:21.056 17:32:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:21.056 17:32:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:21.056 17:32:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:21.056 17:32:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:21.056 17:32:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:21.056 17:32:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.056 17:32:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.056 17:32:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.056 17:32:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.056 17:32:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.056 17:32:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.056 17:32:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.056 17:32:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.056 17:32:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.056 17:32:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.056 "name": "raid_bdev1", 00:16:21.056 "uuid": "60ad1e69-bda6-47c7-95fa-b1ac1d9606a2", 00:16:21.056 "strip_size_kb": 64, 00:16:21.056 "state": "configuring", 00:16:21.056 "raid_level": "raid5f", 00:16:21.056 "superblock": true, 00:16:21.056 
"num_base_bdevs": 4, 00:16:21.056 "num_base_bdevs_discovered": 1, 00:16:21.056 "num_base_bdevs_operational": 3, 00:16:21.056 "base_bdevs_list": [ 00:16:21.056 { 00:16:21.056 "name": null, 00:16:21.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.056 "is_configured": false, 00:16:21.056 "data_offset": 2048, 00:16:21.056 "data_size": 63488 00:16:21.056 }, 00:16:21.056 { 00:16:21.056 "name": "pt2", 00:16:21.056 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:21.056 "is_configured": true, 00:16:21.056 "data_offset": 2048, 00:16:21.056 "data_size": 63488 00:16:21.056 }, 00:16:21.056 { 00:16:21.056 "name": null, 00:16:21.056 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:21.056 "is_configured": false, 00:16:21.056 "data_offset": 2048, 00:16:21.056 "data_size": 63488 00:16:21.056 }, 00:16:21.056 { 00:16:21.056 "name": null, 00:16:21.056 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:21.056 "is_configured": false, 00:16:21.056 "data_offset": 2048, 00:16:21.056 "data_size": 63488 00:16:21.056 } 00:16:21.056 ] 00:16:21.056 }' 00:16:21.056 17:32:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.056 17:32:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.316 17:32:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:21.316 17:32:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:21.316 17:32:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:21.316 17:32:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.316 17:32:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.316 [2024-12-07 17:32:54.597080] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:21.316 [2024-12-07 
17:32:54.597165] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:21.316 [2024-12-07 17:32:54.597191] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:16:21.316 [2024-12-07 17:32:54.597199] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:21.316 [2024-12-07 17:32:54.597648] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:21.317 [2024-12-07 17:32:54.597677] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:21.317 [2024-12-07 17:32:54.597765] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:21.317 [2024-12-07 17:32:54.597792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:21.317 pt3 00:16:21.317 17:32:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.317 17:32:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:21.317 17:32:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:21.317 17:32:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:21.317 17:32:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:21.317 17:32:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:21.317 17:32:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:21.317 17:32:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.317 17:32:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.317 17:32:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:16:21.317 17:32:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.317 17:32:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.317 17:32:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.317 17:32:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.317 17:32:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.317 17:32:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.317 17:32:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.317 "name": "raid_bdev1", 00:16:21.317 "uuid": "60ad1e69-bda6-47c7-95fa-b1ac1d9606a2", 00:16:21.317 "strip_size_kb": 64, 00:16:21.317 "state": "configuring", 00:16:21.317 "raid_level": "raid5f", 00:16:21.317 "superblock": true, 00:16:21.317 "num_base_bdevs": 4, 00:16:21.317 "num_base_bdevs_discovered": 2, 00:16:21.317 "num_base_bdevs_operational": 3, 00:16:21.317 "base_bdevs_list": [ 00:16:21.317 { 00:16:21.317 "name": null, 00:16:21.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.317 "is_configured": false, 00:16:21.317 "data_offset": 2048, 00:16:21.317 "data_size": 63488 00:16:21.317 }, 00:16:21.317 { 00:16:21.317 "name": "pt2", 00:16:21.317 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:21.317 "is_configured": true, 00:16:21.317 "data_offset": 2048, 00:16:21.317 "data_size": 63488 00:16:21.317 }, 00:16:21.317 { 00:16:21.317 "name": "pt3", 00:16:21.317 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:21.317 "is_configured": true, 00:16:21.317 "data_offset": 2048, 00:16:21.317 "data_size": 63488 00:16:21.317 }, 00:16:21.317 { 00:16:21.317 "name": null, 00:16:21.317 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:21.317 "is_configured": false, 00:16:21.317 "data_offset": 2048, 
00:16:21.317 "data_size": 63488 00:16:21.317 } 00:16:21.317 ] 00:16:21.317 }' 00:16:21.317 17:32:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.317 17:32:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.885 17:32:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:21.885 17:32:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:21.885 17:32:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:16:21.885 17:32:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:21.885 17:32:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.885 17:32:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.885 [2024-12-07 17:32:55.032340] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:21.885 [2024-12-07 17:32:55.032404] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:21.885 [2024-12-07 17:32:55.032426] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:21.885 [2024-12-07 17:32:55.032437] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:21.885 [2024-12-07 17:32:55.032890] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:21.885 [2024-12-07 17:32:55.032924] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:21.885 [2024-12-07 17:32:55.033020] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:21.885 [2024-12-07 17:32:55.033050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:21.885 [2024-12-07 17:32:55.033222] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:21.885 [2024-12-07 17:32:55.033238] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:21.885 [2024-12-07 17:32:55.033479] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:21.885 [2024-12-07 17:32:55.040875] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:21.885 [2024-12-07 17:32:55.040904] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:16:21.885 [2024-12-07 17:32:55.041206] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:21.885 pt4 00:16:21.885 17:32:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.885 17:32:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:21.885 17:32:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:21.885 17:32:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:21.885 17:32:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:21.885 17:32:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:21.885 17:32:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:21.885 17:32:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.885 17:32:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.885 17:32:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.885 17:32:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.885 
17:32:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.885 17:32:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.885 17:32:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.885 17:32:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.885 17:32:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.885 17:32:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.885 "name": "raid_bdev1", 00:16:21.885 "uuid": "60ad1e69-bda6-47c7-95fa-b1ac1d9606a2", 00:16:21.885 "strip_size_kb": 64, 00:16:21.885 "state": "online", 00:16:21.885 "raid_level": "raid5f", 00:16:21.885 "superblock": true, 00:16:21.885 "num_base_bdevs": 4, 00:16:21.885 "num_base_bdevs_discovered": 3, 00:16:21.885 "num_base_bdevs_operational": 3, 00:16:21.885 "base_bdevs_list": [ 00:16:21.885 { 00:16:21.885 "name": null, 00:16:21.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.885 "is_configured": false, 00:16:21.885 "data_offset": 2048, 00:16:21.885 "data_size": 63488 00:16:21.885 }, 00:16:21.885 { 00:16:21.885 "name": "pt2", 00:16:21.885 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:21.885 "is_configured": true, 00:16:21.885 "data_offset": 2048, 00:16:21.885 "data_size": 63488 00:16:21.885 }, 00:16:21.885 { 00:16:21.885 "name": "pt3", 00:16:21.885 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:21.885 "is_configured": true, 00:16:21.885 "data_offset": 2048, 00:16:21.885 "data_size": 63488 00:16:21.885 }, 00:16:21.885 { 00:16:21.885 "name": "pt4", 00:16:21.885 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:21.885 "is_configured": true, 00:16:21.885 "data_offset": 2048, 00:16:21.885 "data_size": 63488 00:16:21.885 } 00:16:21.885 ] 00:16:21.885 }' 00:16:21.885 17:32:55 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.885 17:32:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.144 17:32:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:22.144 17:32:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.144 17:32:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.144 [2024-12-07 17:32:55.501274] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:22.144 [2024-12-07 17:32:55.501304] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:22.144 [2024-12-07 17:32:55.501375] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:22.144 [2024-12-07 17:32:55.501446] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:22.144 [2024-12-07 17:32:55.501458] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:22.144 17:32:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.144 17:32:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.144 17:32:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:22.144 17:32:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.144 17:32:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.144 17:32:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.401 17:32:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:22.401 17:32:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:16:22.401 17:32:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:16:22.401 17:32:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:16:22.401 17:32:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:16:22.401 17:32:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.401 17:32:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.401 17:32:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.401 17:32:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:22.401 17:32:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.401 17:32:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.401 [2024-12-07 17:32:55.577136] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:22.401 [2024-12-07 17:32:55.577202] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:22.401 [2024-12-07 17:32:55.577227] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:16:22.401 [2024-12-07 17:32:55.577240] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:22.401 [2024-12-07 17:32:55.579478] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:22.401 [2024-12-07 17:32:55.579554] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:22.401 [2024-12-07 17:32:55.579646] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:22.401 [2024-12-07 17:32:55.579700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:22.401 
[2024-12-07 17:32:55.579844] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:22.401 [2024-12-07 17:32:55.579859] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:22.401 [2024-12-07 17:32:55.579876] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:16:22.401 [2024-12-07 17:32:55.579981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:22.401 [2024-12-07 17:32:55.580107] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:22.401 pt1 00:16:22.401 17:32:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.401 17:32:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:16:22.401 17:32:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:22.401 17:32:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:22.401 17:32:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:22.401 17:32:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:22.401 17:32:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:22.401 17:32:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:22.401 17:32:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.401 17:32:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.401 17:32:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.401 17:32:55 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.401 17:32:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.401 17:32:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.401 17:32:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.401 17:32:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.401 17:32:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.401 17:32:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.402 "name": "raid_bdev1", 00:16:22.402 "uuid": "60ad1e69-bda6-47c7-95fa-b1ac1d9606a2", 00:16:22.402 "strip_size_kb": 64, 00:16:22.402 "state": "configuring", 00:16:22.402 "raid_level": "raid5f", 00:16:22.402 "superblock": true, 00:16:22.402 "num_base_bdevs": 4, 00:16:22.402 "num_base_bdevs_discovered": 2, 00:16:22.402 "num_base_bdevs_operational": 3, 00:16:22.402 "base_bdevs_list": [ 00:16:22.402 { 00:16:22.402 "name": null, 00:16:22.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.402 "is_configured": false, 00:16:22.402 "data_offset": 2048, 00:16:22.402 "data_size": 63488 00:16:22.402 }, 00:16:22.402 { 00:16:22.402 "name": "pt2", 00:16:22.402 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:22.402 "is_configured": true, 00:16:22.402 "data_offset": 2048, 00:16:22.402 "data_size": 63488 00:16:22.402 }, 00:16:22.402 { 00:16:22.402 "name": "pt3", 00:16:22.402 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:22.402 "is_configured": true, 00:16:22.402 "data_offset": 2048, 00:16:22.402 "data_size": 63488 00:16:22.402 }, 00:16:22.402 { 00:16:22.402 "name": null, 00:16:22.402 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:22.402 "is_configured": false, 00:16:22.402 "data_offset": 2048, 00:16:22.402 "data_size": 63488 00:16:22.402 } 00:16:22.402 ] 
00:16:22.402 }' 00:16:22.402 17:32:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.402 17:32:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.660 17:32:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:16:22.660 17:32:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.660 17:32:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.660 17:32:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:22.660 17:32:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.919 17:32:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:16:22.919 17:32:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:22.919 17:32:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.919 17:32:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.919 [2024-12-07 17:32:56.072345] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:22.919 [2024-12-07 17:32:56.072410] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:22.919 [2024-12-07 17:32:56.072433] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:16:22.919 [2024-12-07 17:32:56.072442] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:22.919 [2024-12-07 17:32:56.072884] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:22.919 [2024-12-07 17:32:56.072911] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:16:22.919 [2024-12-07 17:32:56.073002] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:22.919 [2024-12-07 17:32:56.073032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:22.919 [2024-12-07 17:32:56.073170] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:16:22.919 [2024-12-07 17:32:56.073184] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:22.919 [2024-12-07 17:32:56.073435] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:22.919 [2024-12-07 17:32:56.080848] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:16:22.919 [2024-12-07 17:32:56.080876] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:16:22.919 [2024-12-07 17:32:56.081168] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:22.919 pt4 00:16:22.919 17:32:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.919 17:32:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:22.919 17:32:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:22.919 17:32:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:22.919 17:32:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:22.919 17:32:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:22.919 17:32:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:22.919 17:32:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.919 17:32:56 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.919 17:32:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.919 17:32:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.919 17:32:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.920 17:32:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.920 17:32:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.920 17:32:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.920 17:32:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.920 17:32:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.920 "name": "raid_bdev1", 00:16:22.920 "uuid": "60ad1e69-bda6-47c7-95fa-b1ac1d9606a2", 00:16:22.920 "strip_size_kb": 64, 00:16:22.920 "state": "online", 00:16:22.920 "raid_level": "raid5f", 00:16:22.920 "superblock": true, 00:16:22.920 "num_base_bdevs": 4, 00:16:22.920 "num_base_bdevs_discovered": 3, 00:16:22.920 "num_base_bdevs_operational": 3, 00:16:22.920 "base_bdevs_list": [ 00:16:22.920 { 00:16:22.920 "name": null, 00:16:22.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.920 "is_configured": false, 00:16:22.920 "data_offset": 2048, 00:16:22.920 "data_size": 63488 00:16:22.920 }, 00:16:22.920 { 00:16:22.920 "name": "pt2", 00:16:22.920 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:22.920 "is_configured": true, 00:16:22.920 "data_offset": 2048, 00:16:22.920 "data_size": 63488 00:16:22.920 }, 00:16:22.920 { 00:16:22.920 "name": "pt3", 00:16:22.920 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:22.920 "is_configured": true, 00:16:22.920 "data_offset": 2048, 00:16:22.920 "data_size": 63488 
00:16:22.920 }, 00:16:22.920 { 00:16:22.920 "name": "pt4", 00:16:22.920 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:22.920 "is_configured": true, 00:16:22.920 "data_offset": 2048, 00:16:22.920 "data_size": 63488 00:16:22.920 } 00:16:22.920 ] 00:16:22.920 }' 00:16:22.920 17:32:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.920 17:32:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.179 17:32:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:23.179 17:32:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:23.179 17:32:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.179 17:32:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.179 17:32:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.438 17:32:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:23.439 17:32:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:23.439 17:32:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.439 17:32:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.439 17:32:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:23.439 [2024-12-07 17:32:56.565295] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:23.439 17:32:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.439 17:32:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 60ad1e69-bda6-47c7-95fa-b1ac1d9606a2 '!=' 60ad1e69-bda6-47c7-95fa-b1ac1d9606a2 ']' 00:16:23.439 17:32:56 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84096 00:16:23.439 17:32:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 84096 ']' 00:16:23.439 17:32:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 84096 00:16:23.439 17:32:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:16:23.439 17:32:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:23.439 17:32:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84096 00:16:23.439 17:32:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:23.439 17:32:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:23.439 killing process with pid 84096 00:16:23.439 17:32:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84096' 00:16:23.439 17:32:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 84096 00:16:23.439 [2024-12-07 17:32:56.648863] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:23.439 [2024-12-07 17:32:56.648971] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:23.439 [2024-12-07 17:32:56.649072] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:23.439 [2024-12-07 17:32:56.649094] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:16:23.439 17:32:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 84096 00:16:23.698 [2024-12-07 17:32:57.032290] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:25.080 17:32:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:25.080 
00:16:25.080 real 0m8.392s 00:16:25.080 user 0m13.178s 00:16:25.080 sys 0m1.498s 00:16:25.080 17:32:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:25.080 17:32:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.080 ************************************ 00:16:25.080 END TEST raid5f_superblock_test 00:16:25.080 ************************************ 00:16:25.080 17:32:58 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:16:25.080 17:32:58 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:16:25.080 17:32:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:25.080 17:32:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:25.080 17:32:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:25.080 ************************************ 00:16:25.080 START TEST raid5f_rebuild_test 00:16:25.080 ************************************ 00:16:25.080 17:32:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:16:25.080 17:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:25.080 17:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:25.080 17:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:25.080 17:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:25.080 17:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:25.080 17:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:25.080 17:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:25.080 17:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:16:25.080 17:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:25.080 17:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:25.080 17:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:25.080 17:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:25.080 17:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:25.080 17:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:25.080 17:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:25.080 17:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:25.080 17:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:25.080 17:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:25.080 17:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:25.080 17:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:25.080 17:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:25.080 17:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:25.080 17:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:25.080 17:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:25.080 17:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:25.080 17:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:25.080 17:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:25.080 17:32:58 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:25.080 17:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:25.080 17:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:25.080 17:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:25.080 17:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=84588 00:16:25.080 17:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:25.080 17:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 84588 00:16:25.080 17:32:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 84588 ']' 00:16:25.080 17:32:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:25.080 17:32:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:25.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:25.080 17:32:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:25.080 17:32:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:25.080 17:32:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.080 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:25.080 Zero copy mechanism will not be used. 00:16:25.080 [2024-12-07 17:32:58.269151] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:16:25.080 [2024-12-07 17:32:58.269264] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84588 ] 00:16:25.080 [2024-12-07 17:32:58.439780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.340 [2024-12-07 17:32:58.542589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.608 [2024-12-07 17:32:58.726299] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:25.608 [2024-12-07 17:32:58.726338] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:25.885 17:32:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:25.885 17:32:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:16:25.885 17:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:25.885 17:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:25.885 17:32:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.885 17:32:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.885 BaseBdev1_malloc 00:16:25.885 17:32:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.885 17:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:25.885 17:32:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.885 17:32:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.885 [2024-12-07 17:32:59.135312] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:16:25.885 [2024-12-07 17:32:59.135374] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:25.885 [2024-12-07 17:32:59.135396] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:25.885 [2024-12-07 17:32:59.135407] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:25.885 [2024-12-07 17:32:59.137509] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:25.885 [2024-12-07 17:32:59.137553] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:25.885 BaseBdev1 00:16:25.885 17:32:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.885 17:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:25.885 17:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:25.885 17:32:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.885 17:32:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.885 BaseBdev2_malloc 00:16:25.885 17:32:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.885 17:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:25.885 17:32:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.885 17:32:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.885 [2024-12-07 17:32:59.191416] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:25.885 [2024-12-07 17:32:59.191485] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:25.885 [2024-12-07 17:32:59.191508] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:25.885 [2024-12-07 17:32:59.191519] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:25.885 [2024-12-07 17:32:59.193475] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:25.885 [2024-12-07 17:32:59.193516] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:25.885 BaseBdev2 00:16:25.885 17:32:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.885 17:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:25.885 17:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:25.885 17:32:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.885 17:32:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.885 BaseBdev3_malloc 00:16:25.885 17:32:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.885 17:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:25.885 17:32:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.885 17:32:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.144 [2024-12-07 17:32:59.265972] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:26.144 [2024-12-07 17:32:59.266022] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:26.144 [2024-12-07 17:32:59.266044] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:26.144 [2024-12-07 17:32:59.266054] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:26.144 
[2024-12-07 17:32:59.268167] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:26.144 [2024-12-07 17:32:59.268210] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:26.144 BaseBdev3 00:16:26.144 17:32:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.144 17:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:26.144 17:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:26.144 17:32:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.144 17:32:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.144 BaseBdev4_malloc 00:16:26.144 17:32:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.144 17:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:26.144 17:32:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.144 17:32:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.144 [2024-12-07 17:32:59.322175] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:26.144 [2024-12-07 17:32:59.322234] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:26.144 [2024-12-07 17:32:59.322254] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:26.144 [2024-12-07 17:32:59.322265] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:26.144 [2024-12-07 17:32:59.324270] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:26.144 [2024-12-07 17:32:59.324314] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev4 00:16:26.144 BaseBdev4 00:16:26.144 17:32:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.144 17:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:26.144 17:32:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.144 17:32:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.144 spare_malloc 00:16:26.144 17:32:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.144 17:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:26.144 17:32:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.144 17:32:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.144 spare_delay 00:16:26.144 17:32:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.144 17:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:26.144 17:32:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.144 17:32:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.144 [2024-12-07 17:32:59.390840] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:26.144 [2024-12-07 17:32:59.390896] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:26.144 [2024-12-07 17:32:59.390913] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:26.144 [2024-12-07 17:32:59.390924] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:26.144 [2024-12-07 17:32:59.392982] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:26.144 [2024-12-07 17:32:59.393022] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:26.144 spare 00:16:26.144 17:32:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.144 17:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:26.144 17:32:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.144 17:32:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.144 [2024-12-07 17:32:59.402862] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:26.144 [2024-12-07 17:32:59.404617] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:26.144 [2024-12-07 17:32:59.404695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:26.144 [2024-12-07 17:32:59.404745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:26.144 [2024-12-07 17:32:59.404853] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:26.144 [2024-12-07 17:32:59.404871] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:26.144 [2024-12-07 17:32:59.405136] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:26.144 [2024-12-07 17:32:59.412805] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:26.144 [2024-12-07 17:32:59.412827] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:26.144 [2024-12-07 17:32:59.413069] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:26.144 17:32:59 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.144 17:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:26.144 17:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:26.144 17:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:26.144 17:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:26.144 17:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:26.144 17:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:26.144 17:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.144 17:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.144 17:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.144 17:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.144 17:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.144 17:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.144 17:32:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.144 17:32:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.145 17:32:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.145 17:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.145 "name": "raid_bdev1", 00:16:26.145 "uuid": "7955e836-f2ec-4c57-b35a-a8540e3e2412", 00:16:26.145 "strip_size_kb": 64, 00:16:26.145 "state": "online", 00:16:26.145 
"raid_level": "raid5f", 00:16:26.145 "superblock": false, 00:16:26.145 "num_base_bdevs": 4, 00:16:26.145 "num_base_bdevs_discovered": 4, 00:16:26.145 "num_base_bdevs_operational": 4, 00:16:26.145 "base_bdevs_list": [ 00:16:26.145 { 00:16:26.145 "name": "BaseBdev1", 00:16:26.145 "uuid": "a89ac353-b948-568a-807f-f094ad08c574", 00:16:26.145 "is_configured": true, 00:16:26.145 "data_offset": 0, 00:16:26.145 "data_size": 65536 00:16:26.145 }, 00:16:26.145 { 00:16:26.145 "name": "BaseBdev2", 00:16:26.145 "uuid": "a9ef830e-36f7-5731-b6bd-55c3b7c8fe2c", 00:16:26.145 "is_configured": true, 00:16:26.145 "data_offset": 0, 00:16:26.145 "data_size": 65536 00:16:26.145 }, 00:16:26.145 { 00:16:26.145 "name": "BaseBdev3", 00:16:26.145 "uuid": "79b7bc82-1eb0-520b-afa9-2940d332f49d", 00:16:26.145 "is_configured": true, 00:16:26.145 "data_offset": 0, 00:16:26.145 "data_size": 65536 00:16:26.145 }, 00:16:26.145 { 00:16:26.145 "name": "BaseBdev4", 00:16:26.145 "uuid": "f54fa16f-92e1-5dfa-983d-787376479904", 00:16:26.145 "is_configured": true, 00:16:26.145 "data_offset": 0, 00:16:26.145 "data_size": 65536 00:16:26.145 } 00:16:26.145 ] 00:16:26.145 }' 00:16:26.145 17:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.145 17:32:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.711 17:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:26.711 17:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:26.711 17:32:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.711 17:32:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.711 [2024-12-07 17:32:59.861001] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:26.712 17:32:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:26.712 17:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:16:26.712 17:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:26.712 17:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.712 17:32:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.712 17:32:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.712 17:32:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.712 17:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:26.712 17:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:26.712 17:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:26.712 17:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:26.712 17:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:26.712 17:32:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:26.712 17:32:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:26.712 17:32:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:26.712 17:32:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:26.712 17:32:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:26.712 17:32:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:26.712 17:32:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:26.712 17:32:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:16:26.712 17:32:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:26.971 [2024-12-07 17:33:00.100457] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:26.971 /dev/nbd0 00:16:26.971 17:33:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:26.971 17:33:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:26.971 17:33:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:26.971 17:33:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:26.971 17:33:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:26.971 17:33:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:26.971 17:33:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:26.971 17:33:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:26.971 17:33:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:26.971 17:33:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:26.971 17:33:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:26.971 1+0 records in 00:16:26.971 1+0 records out 00:16:26.971 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000382086 s, 10.7 MB/s 00:16:26.971 17:33:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:26.971 17:33:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:26.971 17:33:00 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:26.971 17:33:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:26.971 17:33:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:26.971 17:33:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:26.971 17:33:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:26.971 17:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:26.971 17:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:16:26.971 17:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:16:26.971 17:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:16:27.541 512+0 records in 00:16:27.541 512+0 records out 00:16:27.541 100663296 bytes (101 MB, 96 MiB) copied, 0.490024 s, 205 MB/s 00:16:27.541 17:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:27.541 17:33:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:27.541 17:33:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:27.541 17:33:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:27.541 17:33:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:27.541 17:33:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:27.541 17:33:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:27.541 17:33:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:27.541 
[2024-12-07 17:33:00.870698] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:27.541 17:33:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:27.541 17:33:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:27.541 17:33:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:27.541 17:33:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:27.541 17:33:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:27.541 17:33:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:27.541 17:33:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:27.541 17:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:27.541 17:33:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.541 17:33:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.541 [2024-12-07 17:33:00.885080] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:27.541 17:33:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.541 17:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:27.541 17:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:27.541 17:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:27.541 17:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:27.541 17:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:27.541 17:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:16:27.541 17:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.541 17:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.541 17:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.541 17:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.541 17:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.541 17:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.541 17:33:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.541 17:33:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.541 17:33:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.801 17:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.801 "name": "raid_bdev1", 00:16:27.801 "uuid": "7955e836-f2ec-4c57-b35a-a8540e3e2412", 00:16:27.801 "strip_size_kb": 64, 00:16:27.801 "state": "online", 00:16:27.801 "raid_level": "raid5f", 00:16:27.801 "superblock": false, 00:16:27.801 "num_base_bdevs": 4, 00:16:27.801 "num_base_bdevs_discovered": 3, 00:16:27.801 "num_base_bdevs_operational": 3, 00:16:27.801 "base_bdevs_list": [ 00:16:27.801 { 00:16:27.801 "name": null, 00:16:27.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.801 "is_configured": false, 00:16:27.801 "data_offset": 0, 00:16:27.801 "data_size": 65536 00:16:27.801 }, 00:16:27.801 { 00:16:27.801 "name": "BaseBdev2", 00:16:27.801 "uuid": "a9ef830e-36f7-5731-b6bd-55c3b7c8fe2c", 00:16:27.801 "is_configured": true, 00:16:27.801 "data_offset": 0, 00:16:27.801 "data_size": 65536 00:16:27.801 }, 00:16:27.801 { 00:16:27.801 "name": "BaseBdev3", 00:16:27.801 "uuid": 
"79b7bc82-1eb0-520b-afa9-2940d332f49d", 00:16:27.801 "is_configured": true, 00:16:27.801 "data_offset": 0, 00:16:27.801 "data_size": 65536 00:16:27.801 }, 00:16:27.801 { 00:16:27.801 "name": "BaseBdev4", 00:16:27.801 "uuid": "f54fa16f-92e1-5dfa-983d-787376479904", 00:16:27.801 "is_configured": true, 00:16:27.801 "data_offset": 0, 00:16:27.801 "data_size": 65536 00:16:27.801 } 00:16:27.801 ] 00:16:27.801 }' 00:16:27.801 17:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.801 17:33:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.061 17:33:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:28.061 17:33:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.061 17:33:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.061 [2024-12-07 17:33:01.360275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:28.061 [2024-12-07 17:33:01.376719] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:16:28.062 17:33:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.062 17:33:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:28.062 [2024-12-07 17:33:01.386184] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:29.436 17:33:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:29.437 17:33:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:29.437 17:33:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:29.437 17:33:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:29.437 17:33:02 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:29.437 17:33:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.437 17:33:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.437 17:33:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.437 17:33:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.437 17:33:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.437 17:33:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:29.437 "name": "raid_bdev1", 00:16:29.437 "uuid": "7955e836-f2ec-4c57-b35a-a8540e3e2412", 00:16:29.437 "strip_size_kb": 64, 00:16:29.437 "state": "online", 00:16:29.437 "raid_level": "raid5f", 00:16:29.437 "superblock": false, 00:16:29.437 "num_base_bdevs": 4, 00:16:29.437 "num_base_bdevs_discovered": 4, 00:16:29.437 "num_base_bdevs_operational": 4, 00:16:29.437 "process": { 00:16:29.437 "type": "rebuild", 00:16:29.437 "target": "spare", 00:16:29.437 "progress": { 00:16:29.437 "blocks": 19200, 00:16:29.437 "percent": 9 00:16:29.437 } 00:16:29.437 }, 00:16:29.437 "base_bdevs_list": [ 00:16:29.437 { 00:16:29.437 "name": "spare", 00:16:29.437 "uuid": "e7d87b27-b127-5cd3-870e-9c022290e84b", 00:16:29.437 "is_configured": true, 00:16:29.437 "data_offset": 0, 00:16:29.437 "data_size": 65536 00:16:29.437 }, 00:16:29.437 { 00:16:29.437 "name": "BaseBdev2", 00:16:29.437 "uuid": "a9ef830e-36f7-5731-b6bd-55c3b7c8fe2c", 00:16:29.437 "is_configured": true, 00:16:29.437 "data_offset": 0, 00:16:29.437 "data_size": 65536 00:16:29.437 }, 00:16:29.437 { 00:16:29.437 "name": "BaseBdev3", 00:16:29.437 "uuid": "79b7bc82-1eb0-520b-afa9-2940d332f49d", 00:16:29.437 "is_configured": true, 00:16:29.437 "data_offset": 0, 00:16:29.437 "data_size": 65536 00:16:29.437 }, 
00:16:29.437 { 00:16:29.437 "name": "BaseBdev4", 00:16:29.437 "uuid": "f54fa16f-92e1-5dfa-983d-787376479904", 00:16:29.437 "is_configured": true, 00:16:29.437 "data_offset": 0, 00:16:29.437 "data_size": 65536 00:16:29.437 } 00:16:29.437 ] 00:16:29.437 }' 00:16:29.437 17:33:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:29.437 17:33:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:29.437 17:33:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:29.437 17:33:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:29.437 17:33:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:29.437 17:33:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.437 17:33:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.437 [2024-12-07 17:33:02.513084] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:29.437 [2024-12-07 17:33:02.592923] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:29.437 [2024-12-07 17:33:02.593027] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:29.437 [2024-12-07 17:33:02.593045] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:29.437 [2024-12-07 17:33:02.593059] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:29.437 17:33:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.437 17:33:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:29.437 17:33:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:16:29.437 17:33:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:29.437 17:33:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:29.437 17:33:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:29.437 17:33:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:29.437 17:33:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.437 17:33:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.437 17:33:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.437 17:33:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.437 17:33:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.437 17:33:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.437 17:33:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.437 17:33:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.437 17:33:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.437 17:33:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.437 "name": "raid_bdev1", 00:16:29.437 "uuid": "7955e836-f2ec-4c57-b35a-a8540e3e2412", 00:16:29.437 "strip_size_kb": 64, 00:16:29.437 "state": "online", 00:16:29.437 "raid_level": "raid5f", 00:16:29.437 "superblock": false, 00:16:29.437 "num_base_bdevs": 4, 00:16:29.437 "num_base_bdevs_discovered": 3, 00:16:29.437 "num_base_bdevs_operational": 3, 00:16:29.437 "base_bdevs_list": [ 00:16:29.437 { 00:16:29.437 "name": null, 00:16:29.437 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:29.437 "is_configured": false, 00:16:29.437 "data_offset": 0, 00:16:29.437 "data_size": 65536 00:16:29.437 }, 00:16:29.437 { 00:16:29.437 "name": "BaseBdev2", 00:16:29.437 "uuid": "a9ef830e-36f7-5731-b6bd-55c3b7c8fe2c", 00:16:29.437 "is_configured": true, 00:16:29.437 "data_offset": 0, 00:16:29.437 "data_size": 65536 00:16:29.437 }, 00:16:29.437 { 00:16:29.437 "name": "BaseBdev3", 00:16:29.437 "uuid": "79b7bc82-1eb0-520b-afa9-2940d332f49d", 00:16:29.437 "is_configured": true, 00:16:29.437 "data_offset": 0, 00:16:29.437 "data_size": 65536 00:16:29.438 }, 00:16:29.438 { 00:16:29.438 "name": "BaseBdev4", 00:16:29.438 "uuid": "f54fa16f-92e1-5dfa-983d-787376479904", 00:16:29.438 "is_configured": true, 00:16:29.438 "data_offset": 0, 00:16:29.438 "data_size": 65536 00:16:29.438 } 00:16:29.438 ] 00:16:29.438 }' 00:16:29.438 17:33:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.438 17:33:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.697 17:33:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:29.697 17:33:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:29.697 17:33:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:29.697 17:33:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:29.697 17:33:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:29.697 17:33:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.697 17:33:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.697 17:33:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.697 17:33:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.697 17:33:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.957 17:33:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:29.957 "name": "raid_bdev1", 00:16:29.957 "uuid": "7955e836-f2ec-4c57-b35a-a8540e3e2412", 00:16:29.957 "strip_size_kb": 64, 00:16:29.957 "state": "online", 00:16:29.957 "raid_level": "raid5f", 00:16:29.957 "superblock": false, 00:16:29.957 "num_base_bdevs": 4, 00:16:29.957 "num_base_bdevs_discovered": 3, 00:16:29.957 "num_base_bdevs_operational": 3, 00:16:29.957 "base_bdevs_list": [ 00:16:29.957 { 00:16:29.957 "name": null, 00:16:29.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.957 "is_configured": false, 00:16:29.957 "data_offset": 0, 00:16:29.957 "data_size": 65536 00:16:29.957 }, 00:16:29.957 { 00:16:29.957 "name": "BaseBdev2", 00:16:29.957 "uuid": "a9ef830e-36f7-5731-b6bd-55c3b7c8fe2c", 00:16:29.957 "is_configured": true, 00:16:29.957 "data_offset": 0, 00:16:29.957 "data_size": 65536 00:16:29.957 }, 00:16:29.957 { 00:16:29.957 "name": "BaseBdev3", 00:16:29.957 "uuid": "79b7bc82-1eb0-520b-afa9-2940d332f49d", 00:16:29.957 "is_configured": true, 00:16:29.957 "data_offset": 0, 00:16:29.957 "data_size": 65536 00:16:29.957 }, 00:16:29.957 { 00:16:29.957 "name": "BaseBdev4", 00:16:29.957 "uuid": "f54fa16f-92e1-5dfa-983d-787376479904", 00:16:29.957 "is_configured": true, 00:16:29.957 "data_offset": 0, 00:16:29.957 "data_size": 65536 00:16:29.957 } 00:16:29.957 ] 00:16:29.957 }' 00:16:29.957 17:33:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:29.957 17:33:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:29.957 17:33:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:29.957 17:33:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == 
\n\o\n\e ]] 00:16:29.957 17:33:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:29.957 17:33:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.957 17:33:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.957 [2024-12-07 17:33:03.201245] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:29.957 [2024-12-07 17:33:03.215952] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:16:29.957 17:33:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.957 17:33:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:29.957 [2024-12-07 17:33:03.224889] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:30.898 17:33:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:30.898 17:33:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:30.898 17:33:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:30.898 17:33:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:30.898 17:33:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:30.898 17:33:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.898 17:33:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.898 17:33:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.898 17:33:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.898 17:33:04 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.898 17:33:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:30.898 "name": "raid_bdev1", 00:16:30.898 "uuid": "7955e836-f2ec-4c57-b35a-a8540e3e2412", 00:16:30.898 "strip_size_kb": 64, 00:16:30.898 "state": "online", 00:16:30.898 "raid_level": "raid5f", 00:16:30.898 "superblock": false, 00:16:30.898 "num_base_bdevs": 4, 00:16:30.898 "num_base_bdevs_discovered": 4, 00:16:30.898 "num_base_bdevs_operational": 4, 00:16:30.898 "process": { 00:16:30.898 "type": "rebuild", 00:16:30.898 "target": "spare", 00:16:30.898 "progress": { 00:16:30.898 "blocks": 19200, 00:16:30.898 "percent": 9 00:16:30.898 } 00:16:30.898 }, 00:16:30.898 "base_bdevs_list": [ 00:16:30.898 { 00:16:30.898 "name": "spare", 00:16:30.898 "uuid": "e7d87b27-b127-5cd3-870e-9c022290e84b", 00:16:30.898 "is_configured": true, 00:16:30.898 "data_offset": 0, 00:16:30.898 "data_size": 65536 00:16:30.898 }, 00:16:30.898 { 00:16:30.898 "name": "BaseBdev2", 00:16:30.898 "uuid": "a9ef830e-36f7-5731-b6bd-55c3b7c8fe2c", 00:16:30.898 "is_configured": true, 00:16:30.898 "data_offset": 0, 00:16:30.898 "data_size": 65536 00:16:30.898 }, 00:16:30.898 { 00:16:30.898 "name": "BaseBdev3", 00:16:30.898 "uuid": "79b7bc82-1eb0-520b-afa9-2940d332f49d", 00:16:30.898 "is_configured": true, 00:16:30.898 "data_offset": 0, 00:16:30.898 "data_size": 65536 00:16:30.898 }, 00:16:30.898 { 00:16:30.898 "name": "BaseBdev4", 00:16:30.898 "uuid": "f54fa16f-92e1-5dfa-983d-787376479904", 00:16:30.898 "is_configured": true, 00:16:30.898 "data_offset": 0, 00:16:30.898 "data_size": 65536 00:16:30.898 } 00:16:30.898 ] 00:16:30.898 }' 00:16:30.898 17:33:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:31.158 17:33:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:31.158 17:33:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:16:31.158 17:33:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:31.158 17:33:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:31.158 17:33:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:31.158 17:33:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:31.158 17:33:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=615 00:16:31.158 17:33:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:31.158 17:33:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:31.158 17:33:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:31.158 17:33:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:31.158 17:33:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:31.158 17:33:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:31.158 17:33:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.158 17:33:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.158 17:33:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.158 17:33:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.158 17:33:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.158 17:33:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:31.158 "name": "raid_bdev1", 00:16:31.158 "uuid": "7955e836-f2ec-4c57-b35a-a8540e3e2412", 00:16:31.158 "strip_size_kb": 64, 
00:16:31.158 "state": "online", 00:16:31.158 "raid_level": "raid5f", 00:16:31.158 "superblock": false, 00:16:31.158 "num_base_bdevs": 4, 00:16:31.158 "num_base_bdevs_discovered": 4, 00:16:31.158 "num_base_bdevs_operational": 4, 00:16:31.158 "process": { 00:16:31.158 "type": "rebuild", 00:16:31.158 "target": "spare", 00:16:31.158 "progress": { 00:16:31.158 "blocks": 21120, 00:16:31.158 "percent": 10 00:16:31.158 } 00:16:31.158 }, 00:16:31.158 "base_bdevs_list": [ 00:16:31.158 { 00:16:31.158 "name": "spare", 00:16:31.158 "uuid": "e7d87b27-b127-5cd3-870e-9c022290e84b", 00:16:31.158 "is_configured": true, 00:16:31.158 "data_offset": 0, 00:16:31.158 "data_size": 65536 00:16:31.158 }, 00:16:31.158 { 00:16:31.158 "name": "BaseBdev2", 00:16:31.158 "uuid": "a9ef830e-36f7-5731-b6bd-55c3b7c8fe2c", 00:16:31.158 "is_configured": true, 00:16:31.158 "data_offset": 0, 00:16:31.158 "data_size": 65536 00:16:31.158 }, 00:16:31.158 { 00:16:31.158 "name": "BaseBdev3", 00:16:31.158 "uuid": "79b7bc82-1eb0-520b-afa9-2940d332f49d", 00:16:31.158 "is_configured": true, 00:16:31.158 "data_offset": 0, 00:16:31.158 "data_size": 65536 00:16:31.158 }, 00:16:31.158 { 00:16:31.158 "name": "BaseBdev4", 00:16:31.158 "uuid": "f54fa16f-92e1-5dfa-983d-787376479904", 00:16:31.158 "is_configured": true, 00:16:31.158 "data_offset": 0, 00:16:31.158 "data_size": 65536 00:16:31.158 } 00:16:31.158 ] 00:16:31.158 }' 00:16:31.158 17:33:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:31.158 17:33:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:31.158 17:33:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:31.158 17:33:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:31.158 17:33:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:32.537 17:33:05 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:32.537 17:33:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:32.537 17:33:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:32.537 17:33:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:32.537 17:33:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:32.537 17:33:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:32.537 17:33:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.537 17:33:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.537 17:33:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.537 17:33:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.537 17:33:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.537 17:33:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:32.537 "name": "raid_bdev1", 00:16:32.538 "uuid": "7955e836-f2ec-4c57-b35a-a8540e3e2412", 00:16:32.538 "strip_size_kb": 64, 00:16:32.538 "state": "online", 00:16:32.538 "raid_level": "raid5f", 00:16:32.538 "superblock": false, 00:16:32.538 "num_base_bdevs": 4, 00:16:32.538 "num_base_bdevs_discovered": 4, 00:16:32.538 "num_base_bdevs_operational": 4, 00:16:32.538 "process": { 00:16:32.538 "type": "rebuild", 00:16:32.538 "target": "spare", 00:16:32.538 "progress": { 00:16:32.538 "blocks": 42240, 00:16:32.538 "percent": 21 00:16:32.538 } 00:16:32.538 }, 00:16:32.538 "base_bdevs_list": [ 00:16:32.538 { 00:16:32.538 "name": "spare", 00:16:32.538 "uuid": "e7d87b27-b127-5cd3-870e-9c022290e84b", 00:16:32.538 "is_configured": true, 
00:16:32.538 "data_offset": 0, 00:16:32.538 "data_size": 65536 00:16:32.538 }, 00:16:32.538 { 00:16:32.538 "name": "BaseBdev2", 00:16:32.538 "uuid": "a9ef830e-36f7-5731-b6bd-55c3b7c8fe2c", 00:16:32.538 "is_configured": true, 00:16:32.538 "data_offset": 0, 00:16:32.538 "data_size": 65536 00:16:32.538 }, 00:16:32.538 { 00:16:32.538 "name": "BaseBdev3", 00:16:32.538 "uuid": "79b7bc82-1eb0-520b-afa9-2940d332f49d", 00:16:32.538 "is_configured": true, 00:16:32.538 "data_offset": 0, 00:16:32.538 "data_size": 65536 00:16:32.538 }, 00:16:32.538 { 00:16:32.538 "name": "BaseBdev4", 00:16:32.538 "uuid": "f54fa16f-92e1-5dfa-983d-787376479904", 00:16:32.538 "is_configured": true, 00:16:32.538 "data_offset": 0, 00:16:32.538 "data_size": 65536 00:16:32.538 } 00:16:32.538 ] 00:16:32.538 }' 00:16:32.538 17:33:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:32.538 17:33:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:32.538 17:33:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:32.538 17:33:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:32.538 17:33:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:33.476 17:33:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:33.476 17:33:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:33.476 17:33:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:33.476 17:33:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:33.476 17:33:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:33.476 17:33:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:16:33.476 17:33:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.476 17:33:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.476 17:33:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.476 17:33:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.476 17:33:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.476 17:33:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:33.476 "name": "raid_bdev1", 00:16:33.476 "uuid": "7955e836-f2ec-4c57-b35a-a8540e3e2412", 00:16:33.476 "strip_size_kb": 64, 00:16:33.476 "state": "online", 00:16:33.476 "raid_level": "raid5f", 00:16:33.476 "superblock": false, 00:16:33.476 "num_base_bdevs": 4, 00:16:33.476 "num_base_bdevs_discovered": 4, 00:16:33.476 "num_base_bdevs_operational": 4, 00:16:33.476 "process": { 00:16:33.476 "type": "rebuild", 00:16:33.476 "target": "spare", 00:16:33.476 "progress": { 00:16:33.476 "blocks": 65280, 00:16:33.476 "percent": 33 00:16:33.476 } 00:16:33.476 }, 00:16:33.476 "base_bdevs_list": [ 00:16:33.476 { 00:16:33.476 "name": "spare", 00:16:33.476 "uuid": "e7d87b27-b127-5cd3-870e-9c022290e84b", 00:16:33.476 "is_configured": true, 00:16:33.476 "data_offset": 0, 00:16:33.476 "data_size": 65536 00:16:33.476 }, 00:16:33.476 { 00:16:33.476 "name": "BaseBdev2", 00:16:33.476 "uuid": "a9ef830e-36f7-5731-b6bd-55c3b7c8fe2c", 00:16:33.476 "is_configured": true, 00:16:33.476 "data_offset": 0, 00:16:33.476 "data_size": 65536 00:16:33.476 }, 00:16:33.476 { 00:16:33.476 "name": "BaseBdev3", 00:16:33.476 "uuid": "79b7bc82-1eb0-520b-afa9-2940d332f49d", 00:16:33.476 "is_configured": true, 00:16:33.476 "data_offset": 0, 00:16:33.476 "data_size": 65536 00:16:33.476 }, 00:16:33.476 { 00:16:33.476 "name": "BaseBdev4", 00:16:33.476 "uuid": 
"f54fa16f-92e1-5dfa-983d-787376479904", 00:16:33.476 "is_configured": true, 00:16:33.476 "data_offset": 0, 00:16:33.476 "data_size": 65536 00:16:33.476 } 00:16:33.476 ] 00:16:33.476 }' 00:16:33.476 17:33:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:33.476 17:33:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:33.476 17:33:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:33.476 17:33:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:33.476 17:33:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:34.856 17:33:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:34.856 17:33:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:34.856 17:33:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:34.856 17:33:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:34.856 17:33:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:34.856 17:33:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:34.856 17:33:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.856 17:33:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.856 17:33:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.856 17:33:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.856 17:33:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.856 17:33:07 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:34.856 "name": "raid_bdev1", 00:16:34.856 "uuid": "7955e836-f2ec-4c57-b35a-a8540e3e2412", 00:16:34.856 "strip_size_kb": 64, 00:16:34.856 "state": "online", 00:16:34.856 "raid_level": "raid5f", 00:16:34.856 "superblock": false, 00:16:34.856 "num_base_bdevs": 4, 00:16:34.856 "num_base_bdevs_discovered": 4, 00:16:34.856 "num_base_bdevs_operational": 4, 00:16:34.856 "process": { 00:16:34.856 "type": "rebuild", 00:16:34.856 "target": "spare", 00:16:34.856 "progress": { 00:16:34.856 "blocks": 86400, 00:16:34.856 "percent": 43 00:16:34.856 } 00:16:34.856 }, 00:16:34.856 "base_bdevs_list": [ 00:16:34.856 { 00:16:34.856 "name": "spare", 00:16:34.856 "uuid": "e7d87b27-b127-5cd3-870e-9c022290e84b", 00:16:34.856 "is_configured": true, 00:16:34.856 "data_offset": 0, 00:16:34.856 "data_size": 65536 00:16:34.856 }, 00:16:34.856 { 00:16:34.856 "name": "BaseBdev2", 00:16:34.856 "uuid": "a9ef830e-36f7-5731-b6bd-55c3b7c8fe2c", 00:16:34.856 "is_configured": true, 00:16:34.857 "data_offset": 0, 00:16:34.857 "data_size": 65536 00:16:34.857 }, 00:16:34.857 { 00:16:34.857 "name": "BaseBdev3", 00:16:34.857 "uuid": "79b7bc82-1eb0-520b-afa9-2940d332f49d", 00:16:34.857 "is_configured": true, 00:16:34.857 "data_offset": 0, 00:16:34.857 "data_size": 65536 00:16:34.857 }, 00:16:34.857 { 00:16:34.857 "name": "BaseBdev4", 00:16:34.857 "uuid": "f54fa16f-92e1-5dfa-983d-787376479904", 00:16:34.857 "is_configured": true, 00:16:34.857 "data_offset": 0, 00:16:34.857 "data_size": 65536 00:16:34.857 } 00:16:34.857 ] 00:16:34.857 }' 00:16:34.857 17:33:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:34.857 17:33:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:34.857 17:33:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:34.857 17:33:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ 
spare == \s\p\a\r\e ]] 00:16:34.857 17:33:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:35.794 17:33:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:35.794 17:33:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:35.794 17:33:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:35.794 17:33:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:35.794 17:33:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:35.794 17:33:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:35.794 17:33:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.794 17:33:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.794 17:33:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.794 17:33:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.794 17:33:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.794 17:33:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:35.794 "name": "raid_bdev1", 00:16:35.794 "uuid": "7955e836-f2ec-4c57-b35a-a8540e3e2412", 00:16:35.794 "strip_size_kb": 64, 00:16:35.794 "state": "online", 00:16:35.794 "raid_level": "raid5f", 00:16:35.794 "superblock": false, 00:16:35.794 "num_base_bdevs": 4, 00:16:35.794 "num_base_bdevs_discovered": 4, 00:16:35.794 "num_base_bdevs_operational": 4, 00:16:35.794 "process": { 00:16:35.794 "type": "rebuild", 00:16:35.794 "target": "spare", 00:16:35.794 "progress": { 00:16:35.794 "blocks": 109440, 00:16:35.794 "percent": 55 00:16:35.794 } 00:16:35.794 }, 00:16:35.794 
"base_bdevs_list": [ 00:16:35.794 { 00:16:35.794 "name": "spare", 00:16:35.794 "uuid": "e7d87b27-b127-5cd3-870e-9c022290e84b", 00:16:35.794 "is_configured": true, 00:16:35.794 "data_offset": 0, 00:16:35.794 "data_size": 65536 00:16:35.794 }, 00:16:35.794 { 00:16:35.794 "name": "BaseBdev2", 00:16:35.794 "uuid": "a9ef830e-36f7-5731-b6bd-55c3b7c8fe2c", 00:16:35.794 "is_configured": true, 00:16:35.794 "data_offset": 0, 00:16:35.794 "data_size": 65536 00:16:35.794 }, 00:16:35.794 { 00:16:35.794 "name": "BaseBdev3", 00:16:35.794 "uuid": "79b7bc82-1eb0-520b-afa9-2940d332f49d", 00:16:35.794 "is_configured": true, 00:16:35.794 "data_offset": 0, 00:16:35.794 "data_size": 65536 00:16:35.794 }, 00:16:35.794 { 00:16:35.794 "name": "BaseBdev4", 00:16:35.794 "uuid": "f54fa16f-92e1-5dfa-983d-787376479904", 00:16:35.794 "is_configured": true, 00:16:35.794 "data_offset": 0, 00:16:35.794 "data_size": 65536 00:16:35.794 } 00:16:35.794 ] 00:16:35.794 }' 00:16:35.794 17:33:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:35.794 17:33:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:35.794 17:33:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:35.794 17:33:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:35.794 17:33:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:37.173 17:33:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:37.174 17:33:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:37.174 17:33:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:37.174 17:33:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:37.174 17:33:10 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:37.174 17:33:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:37.174 17:33:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.174 17:33:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.174 17:33:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.174 17:33:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.174 17:33:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.174 17:33:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:37.174 "name": "raid_bdev1", 00:16:37.174 "uuid": "7955e836-f2ec-4c57-b35a-a8540e3e2412", 00:16:37.174 "strip_size_kb": 64, 00:16:37.174 "state": "online", 00:16:37.174 "raid_level": "raid5f", 00:16:37.174 "superblock": false, 00:16:37.174 "num_base_bdevs": 4, 00:16:37.174 "num_base_bdevs_discovered": 4, 00:16:37.174 "num_base_bdevs_operational": 4, 00:16:37.174 "process": { 00:16:37.174 "type": "rebuild", 00:16:37.174 "target": "spare", 00:16:37.174 "progress": { 00:16:37.174 "blocks": 130560, 00:16:37.174 "percent": 66 00:16:37.174 } 00:16:37.174 }, 00:16:37.174 "base_bdevs_list": [ 00:16:37.174 { 00:16:37.174 "name": "spare", 00:16:37.174 "uuid": "e7d87b27-b127-5cd3-870e-9c022290e84b", 00:16:37.174 "is_configured": true, 00:16:37.174 "data_offset": 0, 00:16:37.174 "data_size": 65536 00:16:37.174 }, 00:16:37.174 { 00:16:37.174 "name": "BaseBdev2", 00:16:37.174 "uuid": "a9ef830e-36f7-5731-b6bd-55c3b7c8fe2c", 00:16:37.174 "is_configured": true, 00:16:37.174 "data_offset": 0, 00:16:37.174 "data_size": 65536 00:16:37.174 }, 00:16:37.174 { 00:16:37.174 "name": "BaseBdev3", 00:16:37.174 "uuid": "79b7bc82-1eb0-520b-afa9-2940d332f49d", 00:16:37.174 
"is_configured": true, 00:16:37.174 "data_offset": 0, 00:16:37.174 "data_size": 65536 00:16:37.174 }, 00:16:37.174 { 00:16:37.174 "name": "BaseBdev4", 00:16:37.174 "uuid": "f54fa16f-92e1-5dfa-983d-787376479904", 00:16:37.174 "is_configured": true, 00:16:37.174 "data_offset": 0, 00:16:37.174 "data_size": 65536 00:16:37.174 } 00:16:37.174 ] 00:16:37.174 }' 00:16:37.174 17:33:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:37.174 17:33:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:37.174 17:33:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:37.174 17:33:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:37.174 17:33:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:38.112 17:33:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:38.112 17:33:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:38.112 17:33:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:38.112 17:33:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:38.112 17:33:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:38.112 17:33:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:38.112 17:33:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.112 17:33:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.112 17:33:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.112 17:33:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:16:38.112 17:33:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.112 17:33:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:38.112 "name": "raid_bdev1", 00:16:38.112 "uuid": "7955e836-f2ec-4c57-b35a-a8540e3e2412", 00:16:38.112 "strip_size_kb": 64, 00:16:38.112 "state": "online", 00:16:38.112 "raid_level": "raid5f", 00:16:38.112 "superblock": false, 00:16:38.112 "num_base_bdevs": 4, 00:16:38.112 "num_base_bdevs_discovered": 4, 00:16:38.112 "num_base_bdevs_operational": 4, 00:16:38.112 "process": { 00:16:38.112 "type": "rebuild", 00:16:38.112 "target": "spare", 00:16:38.112 "progress": { 00:16:38.112 "blocks": 153600, 00:16:38.112 "percent": 78 00:16:38.112 } 00:16:38.112 }, 00:16:38.112 "base_bdevs_list": [ 00:16:38.112 { 00:16:38.112 "name": "spare", 00:16:38.112 "uuid": "e7d87b27-b127-5cd3-870e-9c022290e84b", 00:16:38.112 "is_configured": true, 00:16:38.112 "data_offset": 0, 00:16:38.112 "data_size": 65536 00:16:38.112 }, 00:16:38.112 { 00:16:38.112 "name": "BaseBdev2", 00:16:38.112 "uuid": "a9ef830e-36f7-5731-b6bd-55c3b7c8fe2c", 00:16:38.112 "is_configured": true, 00:16:38.112 "data_offset": 0, 00:16:38.112 "data_size": 65536 00:16:38.112 }, 00:16:38.112 { 00:16:38.112 "name": "BaseBdev3", 00:16:38.112 "uuid": "79b7bc82-1eb0-520b-afa9-2940d332f49d", 00:16:38.112 "is_configured": true, 00:16:38.112 "data_offset": 0, 00:16:38.112 "data_size": 65536 00:16:38.112 }, 00:16:38.112 { 00:16:38.112 "name": "BaseBdev4", 00:16:38.112 "uuid": "f54fa16f-92e1-5dfa-983d-787376479904", 00:16:38.112 "is_configured": true, 00:16:38.112 "data_offset": 0, 00:16:38.112 "data_size": 65536 00:16:38.112 } 00:16:38.112 ] 00:16:38.112 }' 00:16:38.112 17:33:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:38.112 17:33:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:38.112 17:33:11 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:38.112 17:33:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:38.112 17:33:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:39.050 17:33:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:39.051 17:33:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:39.051 17:33:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:39.051 17:33:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:39.051 17:33:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:39.051 17:33:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:39.051 17:33:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.051 17:33:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.051 17:33:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.051 17:33:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.051 17:33:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.310 17:33:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:39.310 "name": "raid_bdev1", 00:16:39.310 "uuid": "7955e836-f2ec-4c57-b35a-a8540e3e2412", 00:16:39.310 "strip_size_kb": 64, 00:16:39.310 "state": "online", 00:16:39.310 "raid_level": "raid5f", 00:16:39.310 "superblock": false, 00:16:39.310 "num_base_bdevs": 4, 00:16:39.310 "num_base_bdevs_discovered": 4, 00:16:39.310 "num_base_bdevs_operational": 4, 00:16:39.310 "process": { 00:16:39.310 
"type": "rebuild", 00:16:39.310 "target": "spare", 00:16:39.310 "progress": { 00:16:39.310 "blocks": 174720, 00:16:39.310 "percent": 88 00:16:39.310 } 00:16:39.310 }, 00:16:39.310 "base_bdevs_list": [ 00:16:39.310 { 00:16:39.310 "name": "spare", 00:16:39.310 "uuid": "e7d87b27-b127-5cd3-870e-9c022290e84b", 00:16:39.310 "is_configured": true, 00:16:39.310 "data_offset": 0, 00:16:39.310 "data_size": 65536 00:16:39.310 }, 00:16:39.310 { 00:16:39.310 "name": "BaseBdev2", 00:16:39.310 "uuid": "a9ef830e-36f7-5731-b6bd-55c3b7c8fe2c", 00:16:39.310 "is_configured": true, 00:16:39.310 "data_offset": 0, 00:16:39.310 "data_size": 65536 00:16:39.310 }, 00:16:39.310 { 00:16:39.310 "name": "BaseBdev3", 00:16:39.310 "uuid": "79b7bc82-1eb0-520b-afa9-2940d332f49d", 00:16:39.310 "is_configured": true, 00:16:39.310 "data_offset": 0, 00:16:39.310 "data_size": 65536 00:16:39.310 }, 00:16:39.311 { 00:16:39.311 "name": "BaseBdev4", 00:16:39.311 "uuid": "f54fa16f-92e1-5dfa-983d-787376479904", 00:16:39.311 "is_configured": true, 00:16:39.311 "data_offset": 0, 00:16:39.311 "data_size": 65536 00:16:39.311 } 00:16:39.311 ] 00:16:39.311 }' 00:16:39.311 17:33:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:39.311 17:33:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:39.311 17:33:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:39.311 17:33:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:39.311 17:33:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:40.251 17:33:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:40.251 17:33:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:40.251 17:33:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:16:40.251 17:33:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:40.251 17:33:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:40.251 17:33:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:40.251 17:33:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.251 17:33:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.251 17:33:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.251 17:33:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.251 17:33:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.251 [2024-12-07 17:33:13.580696] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:40.251 [2024-12-07 17:33:13.580782] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:40.251 [2024-12-07 17:33:13.580821] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:40.251 17:33:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:40.251 "name": "raid_bdev1", 00:16:40.251 "uuid": "7955e836-f2ec-4c57-b35a-a8540e3e2412", 00:16:40.251 "strip_size_kb": 64, 00:16:40.251 "state": "online", 00:16:40.251 "raid_level": "raid5f", 00:16:40.251 "superblock": false, 00:16:40.251 "num_base_bdevs": 4, 00:16:40.251 "num_base_bdevs_discovered": 4, 00:16:40.251 "num_base_bdevs_operational": 4, 00:16:40.251 "process": { 00:16:40.251 "type": "rebuild", 00:16:40.251 "target": "spare", 00:16:40.251 "progress": { 00:16:40.251 "blocks": 195840, 00:16:40.251 "percent": 99 00:16:40.251 } 00:16:40.251 }, 00:16:40.251 "base_bdevs_list": [ 00:16:40.251 { 00:16:40.251 "name": 
"spare", 00:16:40.251 "uuid": "e7d87b27-b127-5cd3-870e-9c022290e84b", 00:16:40.251 "is_configured": true, 00:16:40.251 "data_offset": 0, 00:16:40.251 "data_size": 65536 00:16:40.251 }, 00:16:40.251 { 00:16:40.251 "name": "BaseBdev2", 00:16:40.252 "uuid": "a9ef830e-36f7-5731-b6bd-55c3b7c8fe2c", 00:16:40.252 "is_configured": true, 00:16:40.252 "data_offset": 0, 00:16:40.252 "data_size": 65536 00:16:40.252 }, 00:16:40.252 { 00:16:40.252 "name": "BaseBdev3", 00:16:40.252 "uuid": "79b7bc82-1eb0-520b-afa9-2940d332f49d", 00:16:40.252 "is_configured": true, 00:16:40.252 "data_offset": 0, 00:16:40.252 "data_size": 65536 00:16:40.252 }, 00:16:40.252 { 00:16:40.252 "name": "BaseBdev4", 00:16:40.252 "uuid": "f54fa16f-92e1-5dfa-983d-787376479904", 00:16:40.252 "is_configured": true, 00:16:40.252 "data_offset": 0, 00:16:40.252 "data_size": 65536 00:16:40.252 } 00:16:40.252 ] 00:16:40.252 }' 00:16:40.252 17:33:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:40.512 17:33:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:40.512 17:33:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:40.512 17:33:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:40.512 17:33:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:41.452 17:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:41.452 17:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:41.452 17:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:41.452 17:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:41.452 17:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:16:41.452 17:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:41.452 17:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.452 17:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.452 17:33:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.452 17:33:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.452 17:33:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.452 17:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:41.452 "name": "raid_bdev1", 00:16:41.452 "uuid": "7955e836-f2ec-4c57-b35a-a8540e3e2412", 00:16:41.452 "strip_size_kb": 64, 00:16:41.452 "state": "online", 00:16:41.452 "raid_level": "raid5f", 00:16:41.452 "superblock": false, 00:16:41.452 "num_base_bdevs": 4, 00:16:41.452 "num_base_bdevs_discovered": 4, 00:16:41.452 "num_base_bdevs_operational": 4, 00:16:41.452 "base_bdevs_list": [ 00:16:41.452 { 00:16:41.452 "name": "spare", 00:16:41.452 "uuid": "e7d87b27-b127-5cd3-870e-9c022290e84b", 00:16:41.452 "is_configured": true, 00:16:41.452 "data_offset": 0, 00:16:41.452 "data_size": 65536 00:16:41.452 }, 00:16:41.452 { 00:16:41.452 "name": "BaseBdev2", 00:16:41.452 "uuid": "a9ef830e-36f7-5731-b6bd-55c3b7c8fe2c", 00:16:41.452 "is_configured": true, 00:16:41.452 "data_offset": 0, 00:16:41.452 "data_size": 65536 00:16:41.452 }, 00:16:41.452 { 00:16:41.452 "name": "BaseBdev3", 00:16:41.452 "uuid": "79b7bc82-1eb0-520b-afa9-2940d332f49d", 00:16:41.452 "is_configured": true, 00:16:41.452 "data_offset": 0, 00:16:41.452 "data_size": 65536 00:16:41.452 }, 00:16:41.452 { 00:16:41.452 "name": "BaseBdev4", 00:16:41.452 "uuid": "f54fa16f-92e1-5dfa-983d-787376479904", 00:16:41.452 "is_configured": true, 00:16:41.452 "data_offset": 0, 00:16:41.452 
"data_size": 65536 00:16:41.452 } 00:16:41.452 ] 00:16:41.452 }' 00:16:41.452 17:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:41.452 17:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:41.452 17:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:41.714 17:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:41.714 17:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:16:41.714 17:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:41.714 17:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:41.714 17:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:41.714 17:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:41.714 17:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:41.714 17:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.714 17:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.714 17:33:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.714 17:33:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.714 17:33:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.714 17:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:41.714 "name": "raid_bdev1", 00:16:41.714 "uuid": "7955e836-f2ec-4c57-b35a-a8540e3e2412", 00:16:41.714 "strip_size_kb": 64, 00:16:41.714 "state": "online", 00:16:41.714 "raid_level": "raid5f", 
00:16:41.714 "superblock": false, 00:16:41.714 "num_base_bdevs": 4, 00:16:41.714 "num_base_bdevs_discovered": 4, 00:16:41.714 "num_base_bdevs_operational": 4, 00:16:41.714 "base_bdevs_list": [ 00:16:41.714 { 00:16:41.714 "name": "spare", 00:16:41.714 "uuid": "e7d87b27-b127-5cd3-870e-9c022290e84b", 00:16:41.714 "is_configured": true, 00:16:41.714 "data_offset": 0, 00:16:41.714 "data_size": 65536 00:16:41.714 }, 00:16:41.714 { 00:16:41.714 "name": "BaseBdev2", 00:16:41.714 "uuid": "a9ef830e-36f7-5731-b6bd-55c3b7c8fe2c", 00:16:41.714 "is_configured": true, 00:16:41.714 "data_offset": 0, 00:16:41.714 "data_size": 65536 00:16:41.714 }, 00:16:41.714 { 00:16:41.714 "name": "BaseBdev3", 00:16:41.714 "uuid": "79b7bc82-1eb0-520b-afa9-2940d332f49d", 00:16:41.714 "is_configured": true, 00:16:41.714 "data_offset": 0, 00:16:41.714 "data_size": 65536 00:16:41.714 }, 00:16:41.714 { 00:16:41.714 "name": "BaseBdev4", 00:16:41.714 "uuid": "f54fa16f-92e1-5dfa-983d-787376479904", 00:16:41.714 "is_configured": true, 00:16:41.714 "data_offset": 0, 00:16:41.714 "data_size": 65536 00:16:41.714 } 00:16:41.714 ] 00:16:41.714 }' 00:16:41.714 17:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:41.714 17:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:41.714 17:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:41.714 17:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:41.714 17:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:41.714 17:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:41.714 17:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:41.714 17:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid5f 00:16:41.714 17:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:41.714 17:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:41.714 17:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.714 17:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.714 17:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.714 17:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.714 17:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.714 17:33:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.714 17:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.714 17:33:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.714 17:33:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.714 17:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.714 "name": "raid_bdev1", 00:16:41.714 "uuid": "7955e836-f2ec-4c57-b35a-a8540e3e2412", 00:16:41.714 "strip_size_kb": 64, 00:16:41.714 "state": "online", 00:16:41.714 "raid_level": "raid5f", 00:16:41.714 "superblock": false, 00:16:41.714 "num_base_bdevs": 4, 00:16:41.714 "num_base_bdevs_discovered": 4, 00:16:41.714 "num_base_bdevs_operational": 4, 00:16:41.714 "base_bdevs_list": [ 00:16:41.714 { 00:16:41.714 "name": "spare", 00:16:41.714 "uuid": "e7d87b27-b127-5cd3-870e-9c022290e84b", 00:16:41.714 "is_configured": true, 00:16:41.714 "data_offset": 0, 00:16:41.714 "data_size": 65536 00:16:41.714 }, 00:16:41.714 { 00:16:41.714 "name": "BaseBdev2", 00:16:41.714 "uuid": 
"a9ef830e-36f7-5731-b6bd-55c3b7c8fe2c", 00:16:41.714 "is_configured": true, 00:16:41.714 "data_offset": 0, 00:16:41.714 "data_size": 65536 00:16:41.714 }, 00:16:41.714 { 00:16:41.714 "name": "BaseBdev3", 00:16:41.714 "uuid": "79b7bc82-1eb0-520b-afa9-2940d332f49d", 00:16:41.714 "is_configured": true, 00:16:41.714 "data_offset": 0, 00:16:41.714 "data_size": 65536 00:16:41.714 }, 00:16:41.714 { 00:16:41.714 "name": "BaseBdev4", 00:16:41.714 "uuid": "f54fa16f-92e1-5dfa-983d-787376479904", 00:16:41.714 "is_configured": true, 00:16:41.714 "data_offset": 0, 00:16:41.714 "data_size": 65536 00:16:41.714 } 00:16:41.714 ] 00:16:41.714 }' 00:16:41.714 17:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.714 17:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.317 17:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:42.317 17:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.317 17:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.317 [2024-12-07 17:33:15.446544] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:42.317 [2024-12-07 17:33:15.446580] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:42.317 [2024-12-07 17:33:15.446669] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:42.317 [2024-12-07 17:33:15.446790] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:42.317 [2024-12-07 17:33:15.446806] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:42.317 17:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.317 17:33:15 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.317 17:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:16:42.317 17:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.317 17:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.317 17:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.317 17:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:42.317 17:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:42.317 17:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:42.317 17:33:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:42.317 17:33:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:42.317 17:33:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:42.317 17:33:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:42.317 17:33:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:42.317 17:33:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:42.317 17:33:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:42.317 17:33:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:42.317 17:33:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:42.317 17:33:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:42.577 /dev/nbd0 00:16:42.577 17:33:15 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:42.577 17:33:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:42.577 17:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:42.577 17:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:42.577 17:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:42.577 17:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:42.577 17:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:42.577 17:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:42.577 17:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:42.577 17:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:42.577 17:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:42.577 1+0 records in 00:16:42.577 1+0 records out 00:16:42.577 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000459391 s, 8.9 MB/s 00:16:42.577 17:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:42.577 17:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:42.577 17:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:42.577 17:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:42.577 17:33:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:42.577 17:33:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:16:42.577 17:33:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:42.577 17:33:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:42.836 /dev/nbd1 00:16:42.836 17:33:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:42.836 17:33:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:42.837 17:33:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:42.837 17:33:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:42.837 17:33:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:42.837 17:33:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:42.837 17:33:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:42.837 17:33:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:42.837 17:33:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:42.837 17:33:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:42.837 17:33:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:42.837 1+0 records in 00:16:42.837 1+0 records out 00:16:42.837 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000337003 s, 12.2 MB/s 00:16:42.837 17:33:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:42.837 17:33:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:42.837 17:33:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:42.837 17:33:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:42.837 17:33:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:42.837 17:33:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:42.837 17:33:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:42.837 17:33:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:42.837 17:33:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:42.837 17:33:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:42.837 17:33:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:42.837 17:33:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:42.837 17:33:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:42.837 17:33:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:42.837 17:33:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:43.096 17:33:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:43.096 17:33:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:43.096 17:33:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:43.096 17:33:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:43.096 17:33:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:43.096 17:33:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:16:43.096 17:33:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:43.096 17:33:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:43.096 17:33:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:43.096 17:33:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:43.356 17:33:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:43.356 17:33:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:43.356 17:33:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:43.356 17:33:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:43.356 17:33:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:43.356 17:33:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:43.356 17:33:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:43.356 17:33:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:43.356 17:33:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:43.356 17:33:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 84588 00:16:43.356 17:33:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 84588 ']' 00:16:43.356 17:33:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 84588 00:16:43.356 17:33:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:16:43.356 17:33:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:43.356 17:33:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 84588 00:16:43.356 17:33:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:43.356 17:33:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:43.356 killing process with pid 84588 00:16:43.356 17:33:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84588' 00:16:43.356 17:33:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 84588 00:16:43.356 Received shutdown signal, test time was about 60.000000 seconds 00:16:43.356 00:16:43.356 Latency(us) 00:16:43.356 [2024-12-07T17:33:16.738Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:43.356 [2024-12-07T17:33:16.738Z] =================================================================================================================== 00:16:43.356 [2024-12-07T17:33:16.738Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:43.356 [2024-12-07 17:33:16.645013] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:43.356 17:33:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 84588 00:16:43.926 [2024-12-07 17:33:17.118530] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:44.865 17:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:16:44.865 00:16:44.865 real 0m19.999s 00:16:44.865 user 0m23.936s 00:16:44.865 sys 0m2.224s 00:16:44.865 17:33:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:44.865 17:33:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.865 ************************************ 00:16:44.865 END TEST raid5f_rebuild_test 00:16:44.865 ************************************ 00:16:44.865 17:33:18 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:16:44.865 17:33:18 
bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:44.865 17:33:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:44.865 17:33:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:44.865 ************************************ 00:16:44.865 START TEST raid5f_rebuild_test_sb 00:16:44.865 ************************************ 00:16:44.865 17:33:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:16:45.125 17:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:45.125 17:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:45.125 17:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:45.125 17:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:45.125 17:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:45.125 17:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:45.125 17:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:45.125 17:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:45.125 17:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:45.125 17:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:45.125 17:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:45.125 17:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:45.125 17:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:45.125 17:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:45.125 17:33:18 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:45.125 17:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:45.125 17:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:45.125 17:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:45.125 17:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:45.125 17:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:45.125 17:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:45.125 17:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:45.125 17:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:45.125 17:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:45.125 17:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:45.125 17:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:45.125 17:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:45.125 17:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:45.125 17:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:45.125 17:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:45.125 17:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:45.125 17:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:45.125 17:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85104 
00:16:45.125 17:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85104 00:16:45.125 17:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:45.125 17:33:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 85104 ']' 00:16:45.125 17:33:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.125 17:33:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:45.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:45.125 17:33:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:45.125 17:33:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:45.125 17:33:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.125 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:45.125 Zero copy mechanism will not be used. 00:16:45.125 [2024-12-07 17:33:18.341416] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:16:45.125 [2024-12-07 17:33:18.341544] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85104 ] 00:16:45.386 [2024-12-07 17:33:18.511698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.386 [2024-12-07 17:33:18.618275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.645 [2024-12-07 17:33:18.808084] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:45.646 [2024-12-07 17:33:18.808140] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:45.906 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:45.906 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:45.906 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:45.906 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:45.906 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.906 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.906 BaseBdev1_malloc 00:16:45.906 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.906 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:45.906 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.906 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.906 [2024-12-07 17:33:19.265338] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:45.906 [2024-12-07 17:33:19.265397] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.906 [2024-12-07 17:33:19.265419] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:45.906 [2024-12-07 17:33:19.265430] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.906 [2024-12-07 17:33:19.267529] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.906 [2024-12-07 17:33:19.267570] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:45.906 BaseBdev1 00:16:45.906 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.906 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:45.906 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:45.906 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.906 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.167 BaseBdev2_malloc 00:16:46.167 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.167 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:46.167 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.167 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.167 [2024-12-07 17:33:19.321465] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:46.167 [2024-12-07 17:33:19.321522] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:16:46.167 [2024-12-07 17:33:19.321543] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:46.167 [2024-12-07 17:33:19.321554] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:46.167 [2024-12-07 17:33:19.323639] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:46.167 [2024-12-07 17:33:19.323678] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:46.167 BaseBdev2 00:16:46.167 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.167 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:46.167 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:46.167 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.167 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.167 BaseBdev3_malloc 00:16:46.167 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.167 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:46.167 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.167 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.167 [2024-12-07 17:33:19.407483] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:46.167 [2024-12-07 17:33:19.407537] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:46.167 [2024-12-07 17:33:19.407558] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:46.167 [2024-12-07 
17:33:19.407569] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:46.167 [2024-12-07 17:33:19.409559] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:46.167 [2024-12-07 17:33:19.409603] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:46.167 BaseBdev3 00:16:46.167 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.167 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:46.167 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:46.167 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.167 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.167 BaseBdev4_malloc 00:16:46.167 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.167 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:46.167 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.167 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.167 [2024-12-07 17:33:19.461986] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:46.167 [2024-12-07 17:33:19.462043] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:46.167 [2024-12-07 17:33:19.462062] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:46.167 [2024-12-07 17:33:19.462073] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:46.167 [2024-12-07 17:33:19.464287] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:16:46.167 [2024-12-07 17:33:19.464330] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:46.167 BaseBdev4 00:16:46.167 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.167 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:46.167 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.167 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.167 spare_malloc 00:16:46.167 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.167 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:46.167 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.167 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.167 spare_delay 00:16:46.167 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.167 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:46.167 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.167 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.167 [2024-12-07 17:33:19.526803] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:46.167 [2024-12-07 17:33:19.526855] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:46.167 [2024-12-07 17:33:19.526870] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 
00:16:46.167 [2024-12-07 17:33:19.526881] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:46.167 [2024-12-07 17:33:19.528884] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:46.167 [2024-12-07 17:33:19.528925] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:46.167 spare 00:16:46.167 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.167 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:46.167 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.167 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.167 [2024-12-07 17:33:19.538838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:46.167 [2024-12-07 17:33:19.540648] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:46.167 [2024-12-07 17:33:19.540725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:46.167 [2024-12-07 17:33:19.540773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:46.168 [2024-12-07 17:33:19.541021] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:46.168 [2024-12-07 17:33:19.541043] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:46.168 [2024-12-07 17:33:19.541302] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:46.427 [2024-12-07 17:33:19.548392] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:46.427 [2024-12-07 17:33:19.548418] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000007780 00:16:46.427 [2024-12-07 17:33:19.548595] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:46.427 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.427 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:46.427 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:46.427 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:46.427 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:46.427 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:46.427 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:46.427 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.427 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.427 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.427 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.427 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.427 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.427 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.427 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.427 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.427 17:33:19 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.427 "name": "raid_bdev1", 00:16:46.427 "uuid": "30d658c2-60f0-45f0-b490-d0ed0029fd55", 00:16:46.427 "strip_size_kb": 64, 00:16:46.427 "state": "online", 00:16:46.427 "raid_level": "raid5f", 00:16:46.427 "superblock": true, 00:16:46.427 "num_base_bdevs": 4, 00:16:46.427 "num_base_bdevs_discovered": 4, 00:16:46.427 "num_base_bdevs_operational": 4, 00:16:46.427 "base_bdevs_list": [ 00:16:46.427 { 00:16:46.427 "name": "BaseBdev1", 00:16:46.427 "uuid": "dc2ef3ae-6be3-50ab-aa36-c88321de04f7", 00:16:46.427 "is_configured": true, 00:16:46.427 "data_offset": 2048, 00:16:46.427 "data_size": 63488 00:16:46.427 }, 00:16:46.427 { 00:16:46.427 "name": "BaseBdev2", 00:16:46.427 "uuid": "5445732e-057b-50eb-9921-c0a7844127b9", 00:16:46.427 "is_configured": true, 00:16:46.427 "data_offset": 2048, 00:16:46.427 "data_size": 63488 00:16:46.427 }, 00:16:46.427 { 00:16:46.427 "name": "BaseBdev3", 00:16:46.427 "uuid": "ddbbf1eb-aa74-567f-8eec-f1fc6627594b", 00:16:46.427 "is_configured": true, 00:16:46.427 "data_offset": 2048, 00:16:46.427 "data_size": 63488 00:16:46.427 }, 00:16:46.427 { 00:16:46.427 "name": "BaseBdev4", 00:16:46.427 "uuid": "d4246290-41d7-53b5-8ffb-77eea8d45810", 00:16:46.427 "is_configured": true, 00:16:46.427 "data_offset": 2048, 00:16:46.427 "data_size": 63488 00:16:46.427 } 00:16:46.427 ] 00:16:46.427 }' 00:16:46.427 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.427 17:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.696 17:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:46.696 17:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:46.696 17:33:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.696 17:33:20 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.696 [2024-12-07 17:33:20.032403] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:46.696 17:33:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.696 17:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:16:46.696 17:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:46.696 17:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.697 17:33:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.697 17:33:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.958 17:33:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.958 17:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:46.958 17:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:46.958 17:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:46.958 17:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:46.958 17:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:46.958 17:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:46.958 17:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:46.958 17:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:46.958 17:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:46.958 17:33:20 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:46.958 17:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:46.958 17:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:46.958 17:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:46.958 17:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:46.958 [2024-12-07 17:33:20.271844] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:46.958 /dev/nbd0 00:16:46.958 17:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:46.958 17:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:46.958 17:33:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:46.958 17:33:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:46.958 17:33:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:46.958 17:33:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:46.958 17:33:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:46.958 17:33:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:46.958 17:33:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:46.958 17:33:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:46.958 17:33:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:46.958 1+0 records in 00:16:46.958 
1+0 records out 00:16:46.958 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000407254 s, 10.1 MB/s 00:16:46.958 17:33:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:46.958 17:33:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:46.958 17:33:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:47.218 17:33:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:47.218 17:33:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:47.218 17:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:47.218 17:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:47.218 17:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:47.218 17:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:16:47.218 17:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:16:47.218 17:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:16:47.477 496+0 records in 00:16:47.477 496+0 records out 00:16:47.477 97517568 bytes (98 MB, 93 MiB) copied, 0.448833 s, 217 MB/s 00:16:47.477 17:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:47.477 17:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:47.477 17:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:47.477 17:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:47.477 17:33:20 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:47.477 17:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:47.477 17:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:47.738 17:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:47.738 [2024-12-07 17:33:21.014158] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:47.738 17:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:47.738 17:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:47.738 17:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:47.738 17:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:47.738 17:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:47.738 17:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:47.738 17:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:47.738 17:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:47.738 17:33:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.738 17:33:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.738 [2024-12-07 17:33:21.028098] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:47.738 17:33:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.738 17:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:47.738 17:33:21 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:47.738 17:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:47.738 17:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:47.738 17:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:47.738 17:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:47.738 17:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.738 17:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.738 17:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.738 17:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.738 17:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.738 17:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.738 17:33:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.738 17:33:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.738 17:33:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.738 17:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.738 "name": "raid_bdev1", 00:16:47.738 "uuid": "30d658c2-60f0-45f0-b490-d0ed0029fd55", 00:16:47.738 "strip_size_kb": 64, 00:16:47.738 "state": "online", 00:16:47.738 "raid_level": "raid5f", 00:16:47.738 "superblock": true, 00:16:47.738 "num_base_bdevs": 4, 00:16:47.738 "num_base_bdevs_discovered": 3, 00:16:47.738 "num_base_bdevs_operational": 3, 00:16:47.738 
"base_bdevs_list": [ 00:16:47.738 { 00:16:47.738 "name": null, 00:16:47.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.738 "is_configured": false, 00:16:47.738 "data_offset": 0, 00:16:47.738 "data_size": 63488 00:16:47.738 }, 00:16:47.738 { 00:16:47.738 "name": "BaseBdev2", 00:16:47.738 "uuid": "5445732e-057b-50eb-9921-c0a7844127b9", 00:16:47.738 "is_configured": true, 00:16:47.738 "data_offset": 2048, 00:16:47.738 "data_size": 63488 00:16:47.738 }, 00:16:47.738 { 00:16:47.738 "name": "BaseBdev3", 00:16:47.738 "uuid": "ddbbf1eb-aa74-567f-8eec-f1fc6627594b", 00:16:47.738 "is_configured": true, 00:16:47.738 "data_offset": 2048, 00:16:47.738 "data_size": 63488 00:16:47.738 }, 00:16:47.738 { 00:16:47.738 "name": "BaseBdev4", 00:16:47.738 "uuid": "d4246290-41d7-53b5-8ffb-77eea8d45810", 00:16:47.738 "is_configured": true, 00:16:47.738 "data_offset": 2048, 00:16:47.738 "data_size": 63488 00:16:47.738 } 00:16:47.738 ] 00:16:47.738 }' 00:16:47.738 17:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.738 17:33:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.307 17:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:48.307 17:33:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.307 17:33:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.307 [2024-12-07 17:33:21.463349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:48.307 [2024-12-07 17:33:21.479450] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:16:48.307 17:33:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.307 17:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:48.307 [2024-12-07 17:33:21.488996] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:49.245 17:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:49.246 17:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:49.246 17:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:49.246 17:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:49.246 17:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:49.246 17:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.246 17:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.246 17:33:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.246 17:33:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.246 17:33:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.246 17:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:49.246 "name": "raid_bdev1", 00:16:49.246 "uuid": "30d658c2-60f0-45f0-b490-d0ed0029fd55", 00:16:49.246 "strip_size_kb": 64, 00:16:49.246 "state": "online", 00:16:49.246 "raid_level": "raid5f", 00:16:49.246 "superblock": true, 00:16:49.246 "num_base_bdevs": 4, 00:16:49.246 "num_base_bdevs_discovered": 4, 00:16:49.246 "num_base_bdevs_operational": 4, 00:16:49.246 "process": { 00:16:49.246 "type": "rebuild", 00:16:49.246 "target": "spare", 00:16:49.246 "progress": { 00:16:49.246 "blocks": 19200, 00:16:49.246 "percent": 10 00:16:49.246 } 00:16:49.246 }, 00:16:49.246 "base_bdevs_list": [ 00:16:49.246 { 00:16:49.246 "name": "spare", 00:16:49.246 "uuid": 
"203039a0-43ab-565d-80c9-4b5ee09cbad0", 00:16:49.246 "is_configured": true, 00:16:49.246 "data_offset": 2048, 00:16:49.246 "data_size": 63488 00:16:49.246 }, 00:16:49.246 { 00:16:49.246 "name": "BaseBdev2", 00:16:49.246 "uuid": "5445732e-057b-50eb-9921-c0a7844127b9", 00:16:49.246 "is_configured": true, 00:16:49.246 "data_offset": 2048, 00:16:49.246 "data_size": 63488 00:16:49.246 }, 00:16:49.246 { 00:16:49.246 "name": "BaseBdev3", 00:16:49.246 "uuid": "ddbbf1eb-aa74-567f-8eec-f1fc6627594b", 00:16:49.246 "is_configured": true, 00:16:49.246 "data_offset": 2048, 00:16:49.246 "data_size": 63488 00:16:49.246 }, 00:16:49.246 { 00:16:49.246 "name": "BaseBdev4", 00:16:49.246 "uuid": "d4246290-41d7-53b5-8ffb-77eea8d45810", 00:16:49.246 "is_configured": true, 00:16:49.246 "data_offset": 2048, 00:16:49.246 "data_size": 63488 00:16:49.246 } 00:16:49.246 ] 00:16:49.246 }' 00:16:49.246 17:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:49.246 17:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:49.246 17:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:49.246 17:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:49.246 17:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:49.246 17:33:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.246 17:33:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.246 [2024-12-07 17:33:22.619799] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:49.505 [2024-12-07 17:33:22.695478] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:49.505 [2024-12-07 17:33:22.695566] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:49.505 [2024-12-07 17:33:22.695584] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:49.505 [2024-12-07 17:33:22.695594] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:49.505 17:33:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.505 17:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:49.505 17:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:49.505 17:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:49.505 17:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:49.505 17:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:49.505 17:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:49.505 17:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:49.505 17:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.505 17:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:49.505 17:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.505 17:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.505 17:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.505 17:33:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.505 17:33:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:49.505 17:33:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.505 17:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:49.505 "name": "raid_bdev1", 00:16:49.505 "uuid": "30d658c2-60f0-45f0-b490-d0ed0029fd55", 00:16:49.505 "strip_size_kb": 64, 00:16:49.505 "state": "online", 00:16:49.505 "raid_level": "raid5f", 00:16:49.505 "superblock": true, 00:16:49.505 "num_base_bdevs": 4, 00:16:49.505 "num_base_bdevs_discovered": 3, 00:16:49.505 "num_base_bdevs_operational": 3, 00:16:49.505 "base_bdevs_list": [ 00:16:49.505 { 00:16:49.505 "name": null, 00:16:49.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.505 "is_configured": false, 00:16:49.505 "data_offset": 0, 00:16:49.505 "data_size": 63488 00:16:49.505 }, 00:16:49.505 { 00:16:49.505 "name": "BaseBdev2", 00:16:49.505 "uuid": "5445732e-057b-50eb-9921-c0a7844127b9", 00:16:49.505 "is_configured": true, 00:16:49.505 "data_offset": 2048, 00:16:49.505 "data_size": 63488 00:16:49.505 }, 00:16:49.505 { 00:16:49.505 "name": "BaseBdev3", 00:16:49.505 "uuid": "ddbbf1eb-aa74-567f-8eec-f1fc6627594b", 00:16:49.505 "is_configured": true, 00:16:49.505 "data_offset": 2048, 00:16:49.505 "data_size": 63488 00:16:49.505 }, 00:16:49.505 { 00:16:49.505 "name": "BaseBdev4", 00:16:49.505 "uuid": "d4246290-41d7-53b5-8ffb-77eea8d45810", 00:16:49.505 "is_configured": true, 00:16:49.505 "data_offset": 2048, 00:16:49.505 "data_size": 63488 00:16:49.505 } 00:16:49.505 ] 00:16:49.505 }' 00:16:49.505 17:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:49.505 17:33:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.074 17:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:50.074 17:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:50.074 
17:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:50.074 17:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:50.074 17:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:50.074 17:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.074 17:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.074 17:33:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.074 17:33:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.074 17:33:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.074 17:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:50.074 "name": "raid_bdev1", 00:16:50.074 "uuid": "30d658c2-60f0-45f0-b490-d0ed0029fd55", 00:16:50.074 "strip_size_kb": 64, 00:16:50.074 "state": "online", 00:16:50.074 "raid_level": "raid5f", 00:16:50.074 "superblock": true, 00:16:50.074 "num_base_bdevs": 4, 00:16:50.074 "num_base_bdevs_discovered": 3, 00:16:50.074 "num_base_bdevs_operational": 3, 00:16:50.074 "base_bdevs_list": [ 00:16:50.074 { 00:16:50.074 "name": null, 00:16:50.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.074 "is_configured": false, 00:16:50.074 "data_offset": 0, 00:16:50.074 "data_size": 63488 00:16:50.074 }, 00:16:50.074 { 00:16:50.074 "name": "BaseBdev2", 00:16:50.074 "uuid": "5445732e-057b-50eb-9921-c0a7844127b9", 00:16:50.074 "is_configured": true, 00:16:50.074 "data_offset": 2048, 00:16:50.074 "data_size": 63488 00:16:50.074 }, 00:16:50.074 { 00:16:50.074 "name": "BaseBdev3", 00:16:50.074 "uuid": "ddbbf1eb-aa74-567f-8eec-f1fc6627594b", 00:16:50.074 "is_configured": true, 00:16:50.074 "data_offset": 2048, 00:16:50.074 
"data_size": 63488 00:16:50.074 }, 00:16:50.074 { 00:16:50.074 "name": "BaseBdev4", 00:16:50.074 "uuid": "d4246290-41d7-53b5-8ffb-77eea8d45810", 00:16:50.074 "is_configured": true, 00:16:50.074 "data_offset": 2048, 00:16:50.074 "data_size": 63488 00:16:50.074 } 00:16:50.074 ] 00:16:50.074 }' 00:16:50.074 17:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:50.074 17:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:50.074 17:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:50.074 17:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:50.074 17:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:50.074 17:33:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.074 17:33:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.074 [2024-12-07 17:33:23.325354] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:50.074 [2024-12-07 17:33:23.341594] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:16:50.074 17:33:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.074 17:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:50.074 [2024-12-07 17:33:23.351091] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:51.014 17:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:51.014 17:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:51.014 17:33:24 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:51.014 17:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:51.014 17:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:51.014 17:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.014 17:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.014 17:33:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.014 17:33:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.014 17:33:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.274 17:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:51.274 "name": "raid_bdev1", 00:16:51.274 "uuid": "30d658c2-60f0-45f0-b490-d0ed0029fd55", 00:16:51.274 "strip_size_kb": 64, 00:16:51.274 "state": "online", 00:16:51.274 "raid_level": "raid5f", 00:16:51.274 "superblock": true, 00:16:51.274 "num_base_bdevs": 4, 00:16:51.274 "num_base_bdevs_discovered": 4, 00:16:51.274 "num_base_bdevs_operational": 4, 00:16:51.274 "process": { 00:16:51.274 "type": "rebuild", 00:16:51.274 "target": "spare", 00:16:51.274 "progress": { 00:16:51.274 "blocks": 19200, 00:16:51.274 "percent": 10 00:16:51.274 } 00:16:51.274 }, 00:16:51.274 "base_bdevs_list": [ 00:16:51.274 { 00:16:51.274 "name": "spare", 00:16:51.274 "uuid": "203039a0-43ab-565d-80c9-4b5ee09cbad0", 00:16:51.274 "is_configured": true, 00:16:51.274 "data_offset": 2048, 00:16:51.274 "data_size": 63488 00:16:51.274 }, 00:16:51.274 { 00:16:51.274 "name": "BaseBdev2", 00:16:51.274 "uuid": "5445732e-057b-50eb-9921-c0a7844127b9", 00:16:51.274 "is_configured": true, 00:16:51.274 "data_offset": 2048, 00:16:51.274 "data_size": 63488 00:16:51.274 }, 00:16:51.274 { 
00:16:51.274 "name": "BaseBdev3", 00:16:51.274 "uuid": "ddbbf1eb-aa74-567f-8eec-f1fc6627594b", 00:16:51.274 "is_configured": true, 00:16:51.274 "data_offset": 2048, 00:16:51.274 "data_size": 63488 00:16:51.274 }, 00:16:51.274 { 00:16:51.274 "name": "BaseBdev4", 00:16:51.274 "uuid": "d4246290-41d7-53b5-8ffb-77eea8d45810", 00:16:51.274 "is_configured": true, 00:16:51.274 "data_offset": 2048, 00:16:51.274 "data_size": 63488 00:16:51.274 } 00:16:51.274 ] 00:16:51.274 }' 00:16:51.274 17:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:51.274 17:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:51.274 17:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:51.274 17:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:51.274 17:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:51.274 17:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:51.274 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:51.274 17:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:51.274 17:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:51.274 17:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=635 00:16:51.274 17:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:51.274 17:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:51.274 17:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:51.274 17:33:24 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:51.274 17:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:51.274 17:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:51.275 17:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.275 17:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.275 17:33:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.275 17:33:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.275 17:33:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.275 17:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:51.275 "name": "raid_bdev1", 00:16:51.275 "uuid": "30d658c2-60f0-45f0-b490-d0ed0029fd55", 00:16:51.275 "strip_size_kb": 64, 00:16:51.275 "state": "online", 00:16:51.275 "raid_level": "raid5f", 00:16:51.275 "superblock": true, 00:16:51.275 "num_base_bdevs": 4, 00:16:51.275 "num_base_bdevs_discovered": 4, 00:16:51.275 "num_base_bdevs_operational": 4, 00:16:51.275 "process": { 00:16:51.275 "type": "rebuild", 00:16:51.275 "target": "spare", 00:16:51.275 "progress": { 00:16:51.275 "blocks": 21120, 00:16:51.275 "percent": 11 00:16:51.275 } 00:16:51.275 }, 00:16:51.275 "base_bdevs_list": [ 00:16:51.275 { 00:16:51.275 "name": "spare", 00:16:51.275 "uuid": "203039a0-43ab-565d-80c9-4b5ee09cbad0", 00:16:51.275 "is_configured": true, 00:16:51.275 "data_offset": 2048, 00:16:51.275 "data_size": 63488 00:16:51.275 }, 00:16:51.275 { 00:16:51.275 "name": "BaseBdev2", 00:16:51.275 "uuid": "5445732e-057b-50eb-9921-c0a7844127b9", 00:16:51.275 "is_configured": true, 00:16:51.275 "data_offset": 2048, 00:16:51.275 "data_size": 63488 00:16:51.275 }, 00:16:51.275 { 
00:16:51.275 "name": "BaseBdev3", 00:16:51.275 "uuid": "ddbbf1eb-aa74-567f-8eec-f1fc6627594b", 00:16:51.275 "is_configured": true, 00:16:51.275 "data_offset": 2048, 00:16:51.275 "data_size": 63488 00:16:51.275 }, 00:16:51.275 { 00:16:51.275 "name": "BaseBdev4", 00:16:51.275 "uuid": "d4246290-41d7-53b5-8ffb-77eea8d45810", 00:16:51.275 "is_configured": true, 00:16:51.275 "data_offset": 2048, 00:16:51.275 "data_size": 63488 00:16:51.275 } 00:16:51.275 ] 00:16:51.275 }' 00:16:51.275 17:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:51.275 17:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:51.275 17:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:51.275 17:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:51.275 17:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:52.658 17:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:52.658 17:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:52.658 17:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:52.658 17:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:52.658 17:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:52.658 17:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:52.658 17:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.658 17:33:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.658 17:33:25 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.658 17:33:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.658 17:33:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.658 17:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:52.658 "name": "raid_bdev1", 00:16:52.658 "uuid": "30d658c2-60f0-45f0-b490-d0ed0029fd55", 00:16:52.658 "strip_size_kb": 64, 00:16:52.658 "state": "online", 00:16:52.658 "raid_level": "raid5f", 00:16:52.658 "superblock": true, 00:16:52.658 "num_base_bdevs": 4, 00:16:52.658 "num_base_bdevs_discovered": 4, 00:16:52.658 "num_base_bdevs_operational": 4, 00:16:52.658 "process": { 00:16:52.658 "type": "rebuild", 00:16:52.658 "target": "spare", 00:16:52.658 "progress": { 00:16:52.658 "blocks": 42240, 00:16:52.658 "percent": 22 00:16:52.658 } 00:16:52.658 }, 00:16:52.658 "base_bdevs_list": [ 00:16:52.658 { 00:16:52.658 "name": "spare", 00:16:52.658 "uuid": "203039a0-43ab-565d-80c9-4b5ee09cbad0", 00:16:52.658 "is_configured": true, 00:16:52.658 "data_offset": 2048, 00:16:52.658 "data_size": 63488 00:16:52.658 }, 00:16:52.658 { 00:16:52.658 "name": "BaseBdev2", 00:16:52.658 "uuid": "5445732e-057b-50eb-9921-c0a7844127b9", 00:16:52.658 "is_configured": true, 00:16:52.658 "data_offset": 2048, 00:16:52.658 "data_size": 63488 00:16:52.658 }, 00:16:52.658 { 00:16:52.658 "name": "BaseBdev3", 00:16:52.658 "uuid": "ddbbf1eb-aa74-567f-8eec-f1fc6627594b", 00:16:52.658 "is_configured": true, 00:16:52.658 "data_offset": 2048, 00:16:52.658 "data_size": 63488 00:16:52.658 }, 00:16:52.658 { 00:16:52.658 "name": "BaseBdev4", 00:16:52.658 "uuid": "d4246290-41d7-53b5-8ffb-77eea8d45810", 00:16:52.658 "is_configured": true, 00:16:52.658 "data_offset": 2048, 00:16:52.658 "data_size": 63488 00:16:52.658 } 00:16:52.658 ] 00:16:52.658 }' 00:16:52.658 17:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:16:52.658 17:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:52.658 17:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:52.658 17:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:52.658 17:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:53.596 17:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:53.596 17:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:53.596 17:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:53.596 17:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:53.597 17:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:53.597 17:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:53.597 17:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.597 17:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.597 17:33:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.597 17:33:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.597 17:33:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.597 17:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:53.597 "name": "raid_bdev1", 00:16:53.597 "uuid": "30d658c2-60f0-45f0-b490-d0ed0029fd55", 00:16:53.597 "strip_size_kb": 64, 00:16:53.597 "state": "online", 00:16:53.597 
"raid_level": "raid5f", 00:16:53.597 "superblock": true, 00:16:53.597 "num_base_bdevs": 4, 00:16:53.597 "num_base_bdevs_discovered": 4, 00:16:53.597 "num_base_bdevs_operational": 4, 00:16:53.597 "process": { 00:16:53.597 "type": "rebuild", 00:16:53.597 "target": "spare", 00:16:53.597 "progress": { 00:16:53.597 "blocks": 65280, 00:16:53.597 "percent": 34 00:16:53.597 } 00:16:53.597 }, 00:16:53.597 "base_bdevs_list": [ 00:16:53.597 { 00:16:53.597 "name": "spare", 00:16:53.597 "uuid": "203039a0-43ab-565d-80c9-4b5ee09cbad0", 00:16:53.597 "is_configured": true, 00:16:53.597 "data_offset": 2048, 00:16:53.597 "data_size": 63488 00:16:53.597 }, 00:16:53.597 { 00:16:53.597 "name": "BaseBdev2", 00:16:53.597 "uuid": "5445732e-057b-50eb-9921-c0a7844127b9", 00:16:53.597 "is_configured": true, 00:16:53.597 "data_offset": 2048, 00:16:53.597 "data_size": 63488 00:16:53.597 }, 00:16:53.597 { 00:16:53.597 "name": "BaseBdev3", 00:16:53.597 "uuid": "ddbbf1eb-aa74-567f-8eec-f1fc6627594b", 00:16:53.597 "is_configured": true, 00:16:53.597 "data_offset": 2048, 00:16:53.597 "data_size": 63488 00:16:53.597 }, 00:16:53.597 { 00:16:53.597 "name": "BaseBdev4", 00:16:53.597 "uuid": "d4246290-41d7-53b5-8ffb-77eea8d45810", 00:16:53.597 "is_configured": true, 00:16:53.597 "data_offset": 2048, 00:16:53.597 "data_size": 63488 00:16:53.597 } 00:16:53.597 ] 00:16:53.597 }' 00:16:53.597 17:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:53.597 17:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:53.597 17:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:53.597 17:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:53.597 17:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:54.543 17:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- 
# (( SECONDS < timeout )) 00:16:54.543 17:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:54.543 17:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:54.543 17:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:54.543 17:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:54.543 17:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:54.543 17:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.543 17:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.543 17:33:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.543 17:33:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.802 17:33:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.802 17:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:54.802 "name": "raid_bdev1", 00:16:54.802 "uuid": "30d658c2-60f0-45f0-b490-d0ed0029fd55", 00:16:54.802 "strip_size_kb": 64, 00:16:54.802 "state": "online", 00:16:54.802 "raid_level": "raid5f", 00:16:54.802 "superblock": true, 00:16:54.802 "num_base_bdevs": 4, 00:16:54.802 "num_base_bdevs_discovered": 4, 00:16:54.802 "num_base_bdevs_operational": 4, 00:16:54.802 "process": { 00:16:54.802 "type": "rebuild", 00:16:54.802 "target": "spare", 00:16:54.802 "progress": { 00:16:54.802 "blocks": 86400, 00:16:54.802 "percent": 45 00:16:54.802 } 00:16:54.802 }, 00:16:54.802 "base_bdevs_list": [ 00:16:54.802 { 00:16:54.802 "name": "spare", 00:16:54.802 "uuid": "203039a0-43ab-565d-80c9-4b5ee09cbad0", 00:16:54.802 "is_configured": true, 
00:16:54.802 "data_offset": 2048, 00:16:54.802 "data_size": 63488 00:16:54.802 }, 00:16:54.802 { 00:16:54.802 "name": "BaseBdev2", 00:16:54.802 "uuid": "5445732e-057b-50eb-9921-c0a7844127b9", 00:16:54.802 "is_configured": true, 00:16:54.802 "data_offset": 2048, 00:16:54.802 "data_size": 63488 00:16:54.802 }, 00:16:54.802 { 00:16:54.803 "name": "BaseBdev3", 00:16:54.803 "uuid": "ddbbf1eb-aa74-567f-8eec-f1fc6627594b", 00:16:54.803 "is_configured": true, 00:16:54.803 "data_offset": 2048, 00:16:54.803 "data_size": 63488 00:16:54.803 }, 00:16:54.803 { 00:16:54.803 "name": "BaseBdev4", 00:16:54.803 "uuid": "d4246290-41d7-53b5-8ffb-77eea8d45810", 00:16:54.803 "is_configured": true, 00:16:54.803 "data_offset": 2048, 00:16:54.803 "data_size": 63488 00:16:54.803 } 00:16:54.803 ] 00:16:54.803 }' 00:16:54.803 17:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:54.803 17:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:54.803 17:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:54.803 17:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:54.803 17:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:55.740 17:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:55.740 17:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:55.740 17:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:55.740 17:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:55.740 17:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:55.740 17:33:29 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:55.740 17:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.740 17:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.740 17:33:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.740 17:33:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.740 17:33:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.740 17:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:55.740 "name": "raid_bdev1", 00:16:55.740 "uuid": "30d658c2-60f0-45f0-b490-d0ed0029fd55", 00:16:55.740 "strip_size_kb": 64, 00:16:55.740 "state": "online", 00:16:55.740 "raid_level": "raid5f", 00:16:55.740 "superblock": true, 00:16:55.740 "num_base_bdevs": 4, 00:16:55.740 "num_base_bdevs_discovered": 4, 00:16:55.740 "num_base_bdevs_operational": 4, 00:16:55.740 "process": { 00:16:55.740 "type": "rebuild", 00:16:55.740 "target": "spare", 00:16:55.740 "progress": { 00:16:55.740 "blocks": 107520, 00:16:55.740 "percent": 56 00:16:55.740 } 00:16:55.740 }, 00:16:55.740 "base_bdevs_list": [ 00:16:55.740 { 00:16:55.740 "name": "spare", 00:16:55.740 "uuid": "203039a0-43ab-565d-80c9-4b5ee09cbad0", 00:16:55.740 "is_configured": true, 00:16:55.740 "data_offset": 2048, 00:16:55.740 "data_size": 63488 00:16:55.740 }, 00:16:55.740 { 00:16:55.740 "name": "BaseBdev2", 00:16:55.740 "uuid": "5445732e-057b-50eb-9921-c0a7844127b9", 00:16:55.740 "is_configured": true, 00:16:55.740 "data_offset": 2048, 00:16:55.740 "data_size": 63488 00:16:55.740 }, 00:16:55.740 { 00:16:55.740 "name": "BaseBdev3", 00:16:55.740 "uuid": "ddbbf1eb-aa74-567f-8eec-f1fc6627594b", 00:16:55.740 "is_configured": true, 00:16:55.740 "data_offset": 2048, 00:16:55.740 "data_size": 63488 00:16:55.740 }, 00:16:55.740 
{ 00:16:55.740 "name": "BaseBdev4", 00:16:55.740 "uuid": "d4246290-41d7-53b5-8ffb-77eea8d45810", 00:16:55.740 "is_configured": true, 00:16:55.740 "data_offset": 2048, 00:16:55.740 "data_size": 63488 00:16:55.740 } 00:16:55.740 ] 00:16:55.740 }' 00:16:56.000 17:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:56.001 17:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:56.001 17:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:56.001 17:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:56.001 17:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:56.964 17:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:56.964 17:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:56.964 17:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:56.964 17:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:56.964 17:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:56.964 17:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:56.964 17:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.964 17:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.964 17:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.964 17:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.964 17:33:30 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.964 17:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:56.964 "name": "raid_bdev1", 00:16:56.964 "uuid": "30d658c2-60f0-45f0-b490-d0ed0029fd55", 00:16:56.964 "strip_size_kb": 64, 00:16:56.964 "state": "online", 00:16:56.964 "raid_level": "raid5f", 00:16:56.964 "superblock": true, 00:16:56.964 "num_base_bdevs": 4, 00:16:56.964 "num_base_bdevs_discovered": 4, 00:16:56.964 "num_base_bdevs_operational": 4, 00:16:56.964 "process": { 00:16:56.964 "type": "rebuild", 00:16:56.964 "target": "spare", 00:16:56.964 "progress": { 00:16:56.964 "blocks": 130560, 00:16:56.964 "percent": 68 00:16:56.964 } 00:16:56.964 }, 00:16:56.964 "base_bdevs_list": [ 00:16:56.964 { 00:16:56.964 "name": "spare", 00:16:56.964 "uuid": "203039a0-43ab-565d-80c9-4b5ee09cbad0", 00:16:56.964 "is_configured": true, 00:16:56.964 "data_offset": 2048, 00:16:56.964 "data_size": 63488 00:16:56.964 }, 00:16:56.964 { 00:16:56.964 "name": "BaseBdev2", 00:16:56.964 "uuid": "5445732e-057b-50eb-9921-c0a7844127b9", 00:16:56.964 "is_configured": true, 00:16:56.964 "data_offset": 2048, 00:16:56.964 "data_size": 63488 00:16:56.964 }, 00:16:56.964 { 00:16:56.964 "name": "BaseBdev3", 00:16:56.964 "uuid": "ddbbf1eb-aa74-567f-8eec-f1fc6627594b", 00:16:56.964 "is_configured": true, 00:16:56.964 "data_offset": 2048, 00:16:56.964 "data_size": 63488 00:16:56.964 }, 00:16:56.964 { 00:16:56.964 "name": "BaseBdev4", 00:16:56.964 "uuid": "d4246290-41d7-53b5-8ffb-77eea8d45810", 00:16:56.964 "is_configured": true, 00:16:56.964 "data_offset": 2048, 00:16:56.964 "data_size": 63488 00:16:56.964 } 00:16:56.964 ] 00:16:56.964 }' 00:16:56.964 17:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:56.964 17:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:56.964 17:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:16:57.223 17:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:57.223 17:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:58.160 17:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:58.160 17:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:58.160 17:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:58.160 17:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:58.160 17:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:58.160 17:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:58.160 17:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.160 17:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.160 17:33:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.160 17:33:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.160 17:33:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.160 17:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:58.160 "name": "raid_bdev1", 00:16:58.160 "uuid": "30d658c2-60f0-45f0-b490-d0ed0029fd55", 00:16:58.160 "strip_size_kb": 64, 00:16:58.160 "state": "online", 00:16:58.160 "raid_level": "raid5f", 00:16:58.160 "superblock": true, 00:16:58.160 "num_base_bdevs": 4, 00:16:58.160 "num_base_bdevs_discovered": 4, 00:16:58.160 "num_base_bdevs_operational": 4, 00:16:58.160 "process": { 00:16:58.160 "type": 
"rebuild", 00:16:58.160 "target": "spare", 00:16:58.160 "progress": { 00:16:58.160 "blocks": 151680, 00:16:58.160 "percent": 79 00:16:58.160 } 00:16:58.160 }, 00:16:58.160 "base_bdevs_list": [ 00:16:58.160 { 00:16:58.160 "name": "spare", 00:16:58.160 "uuid": "203039a0-43ab-565d-80c9-4b5ee09cbad0", 00:16:58.160 "is_configured": true, 00:16:58.160 "data_offset": 2048, 00:16:58.160 "data_size": 63488 00:16:58.160 }, 00:16:58.160 { 00:16:58.160 "name": "BaseBdev2", 00:16:58.160 "uuid": "5445732e-057b-50eb-9921-c0a7844127b9", 00:16:58.160 "is_configured": true, 00:16:58.160 "data_offset": 2048, 00:16:58.160 "data_size": 63488 00:16:58.160 }, 00:16:58.160 { 00:16:58.160 "name": "BaseBdev3", 00:16:58.160 "uuid": "ddbbf1eb-aa74-567f-8eec-f1fc6627594b", 00:16:58.160 "is_configured": true, 00:16:58.160 "data_offset": 2048, 00:16:58.160 "data_size": 63488 00:16:58.160 }, 00:16:58.160 { 00:16:58.160 "name": "BaseBdev4", 00:16:58.160 "uuid": "d4246290-41d7-53b5-8ffb-77eea8d45810", 00:16:58.160 "is_configured": true, 00:16:58.160 "data_offset": 2048, 00:16:58.160 "data_size": 63488 00:16:58.160 } 00:16:58.160 ] 00:16:58.160 }' 00:16:58.161 17:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:58.161 17:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:58.161 17:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:58.161 17:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:58.161 17:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:59.542 17:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:59.542 17:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:59.542 17:33:32 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:59.542 17:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:59.542 17:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:59.542 17:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:59.542 17:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.543 17:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.543 17:33:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.543 17:33:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.543 17:33:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.543 17:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:59.543 "name": "raid_bdev1", 00:16:59.543 "uuid": "30d658c2-60f0-45f0-b490-d0ed0029fd55", 00:16:59.543 "strip_size_kb": 64, 00:16:59.543 "state": "online", 00:16:59.543 "raid_level": "raid5f", 00:16:59.543 "superblock": true, 00:16:59.543 "num_base_bdevs": 4, 00:16:59.543 "num_base_bdevs_discovered": 4, 00:16:59.543 "num_base_bdevs_operational": 4, 00:16:59.543 "process": { 00:16:59.543 "type": "rebuild", 00:16:59.543 "target": "spare", 00:16:59.543 "progress": { 00:16:59.543 "blocks": 174720, 00:16:59.543 "percent": 91 00:16:59.543 } 00:16:59.543 }, 00:16:59.543 "base_bdevs_list": [ 00:16:59.543 { 00:16:59.543 "name": "spare", 00:16:59.543 "uuid": "203039a0-43ab-565d-80c9-4b5ee09cbad0", 00:16:59.543 "is_configured": true, 00:16:59.543 "data_offset": 2048, 00:16:59.543 "data_size": 63488 00:16:59.543 }, 00:16:59.543 { 00:16:59.543 "name": "BaseBdev2", 00:16:59.543 "uuid": "5445732e-057b-50eb-9921-c0a7844127b9", 00:16:59.543 
"is_configured": true, 00:16:59.543 "data_offset": 2048, 00:16:59.543 "data_size": 63488 00:16:59.543 }, 00:16:59.543 { 00:16:59.543 "name": "BaseBdev3", 00:16:59.543 "uuid": "ddbbf1eb-aa74-567f-8eec-f1fc6627594b", 00:16:59.543 "is_configured": true, 00:16:59.543 "data_offset": 2048, 00:16:59.543 "data_size": 63488 00:16:59.543 }, 00:16:59.543 { 00:16:59.543 "name": "BaseBdev4", 00:16:59.543 "uuid": "d4246290-41d7-53b5-8ffb-77eea8d45810", 00:16:59.543 "is_configured": true, 00:16:59.543 "data_offset": 2048, 00:16:59.543 "data_size": 63488 00:16:59.543 } 00:16:59.543 ] 00:16:59.543 }' 00:16:59.543 17:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:59.543 17:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:59.543 17:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:59.543 17:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:59.543 17:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:00.114 [2024-12-07 17:33:33.404757] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:00.114 [2024-12-07 17:33:33.404919] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:00.114 [2024-12-07 17:33:33.405112] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:00.375 17:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:00.375 17:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:00.375 17:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:00.375 17:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:17:00.375 17:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:00.375 17:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:00.375 17:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.375 17:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.375 17:33:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.375 17:33:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.375 17:33:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.375 17:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:00.375 "name": "raid_bdev1", 00:17:00.375 "uuid": "30d658c2-60f0-45f0-b490-d0ed0029fd55", 00:17:00.375 "strip_size_kb": 64, 00:17:00.375 "state": "online", 00:17:00.375 "raid_level": "raid5f", 00:17:00.375 "superblock": true, 00:17:00.375 "num_base_bdevs": 4, 00:17:00.375 "num_base_bdevs_discovered": 4, 00:17:00.375 "num_base_bdevs_operational": 4, 00:17:00.375 "base_bdevs_list": [ 00:17:00.375 { 00:17:00.375 "name": "spare", 00:17:00.375 "uuid": "203039a0-43ab-565d-80c9-4b5ee09cbad0", 00:17:00.375 "is_configured": true, 00:17:00.375 "data_offset": 2048, 00:17:00.375 "data_size": 63488 00:17:00.375 }, 00:17:00.375 { 00:17:00.375 "name": "BaseBdev2", 00:17:00.375 "uuid": "5445732e-057b-50eb-9921-c0a7844127b9", 00:17:00.375 "is_configured": true, 00:17:00.375 "data_offset": 2048, 00:17:00.375 "data_size": 63488 00:17:00.375 }, 00:17:00.375 { 00:17:00.375 "name": "BaseBdev3", 00:17:00.375 "uuid": "ddbbf1eb-aa74-567f-8eec-f1fc6627594b", 00:17:00.375 "is_configured": true, 00:17:00.375 "data_offset": 2048, 00:17:00.375 "data_size": 63488 00:17:00.375 }, 00:17:00.375 { 00:17:00.375 "name": 
"BaseBdev4", 00:17:00.375 "uuid": "d4246290-41d7-53b5-8ffb-77eea8d45810", 00:17:00.375 "is_configured": true, 00:17:00.375 "data_offset": 2048, 00:17:00.375 "data_size": 63488 00:17:00.375 } 00:17:00.375 ] 00:17:00.375 }' 00:17:00.375 17:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:00.636 17:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:00.636 17:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:00.636 17:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:00.636 17:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:17:00.636 17:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:00.636 17:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:00.636 17:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:00.636 17:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:00.636 17:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:00.636 17:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.636 17:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.636 17:33:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.636 17:33:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.636 17:33:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.636 17:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:17:00.636 "name": "raid_bdev1", 00:17:00.636 "uuid": "30d658c2-60f0-45f0-b490-d0ed0029fd55", 00:17:00.636 "strip_size_kb": 64, 00:17:00.636 "state": "online", 00:17:00.636 "raid_level": "raid5f", 00:17:00.636 "superblock": true, 00:17:00.636 "num_base_bdevs": 4, 00:17:00.636 "num_base_bdevs_discovered": 4, 00:17:00.636 "num_base_bdevs_operational": 4, 00:17:00.636 "base_bdevs_list": [ 00:17:00.636 { 00:17:00.636 "name": "spare", 00:17:00.636 "uuid": "203039a0-43ab-565d-80c9-4b5ee09cbad0", 00:17:00.636 "is_configured": true, 00:17:00.636 "data_offset": 2048, 00:17:00.636 "data_size": 63488 00:17:00.636 }, 00:17:00.636 { 00:17:00.636 "name": "BaseBdev2", 00:17:00.636 "uuid": "5445732e-057b-50eb-9921-c0a7844127b9", 00:17:00.636 "is_configured": true, 00:17:00.636 "data_offset": 2048, 00:17:00.636 "data_size": 63488 00:17:00.636 }, 00:17:00.636 { 00:17:00.636 "name": "BaseBdev3", 00:17:00.636 "uuid": "ddbbf1eb-aa74-567f-8eec-f1fc6627594b", 00:17:00.636 "is_configured": true, 00:17:00.636 "data_offset": 2048, 00:17:00.636 "data_size": 63488 00:17:00.636 }, 00:17:00.636 { 00:17:00.636 "name": "BaseBdev4", 00:17:00.636 "uuid": "d4246290-41d7-53b5-8ffb-77eea8d45810", 00:17:00.636 "is_configured": true, 00:17:00.636 "data_offset": 2048, 00:17:00.636 "data_size": 63488 00:17:00.636 } 00:17:00.636 ] 00:17:00.636 }' 00:17:00.636 17:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:00.636 17:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:00.636 17:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:00.636 17:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:00.636 17:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:00.636 17:33:33 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:00.636 17:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:00.636 17:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:00.636 17:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:00.636 17:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:00.636 17:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.636 17:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.636 17:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.636 17:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.637 17:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.637 17:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.637 17:33:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.637 17:33:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.637 17:33:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.897 17:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.897 "name": "raid_bdev1", 00:17:00.897 "uuid": "30d658c2-60f0-45f0-b490-d0ed0029fd55", 00:17:00.897 "strip_size_kb": 64, 00:17:00.897 "state": "online", 00:17:00.897 "raid_level": "raid5f", 00:17:00.897 "superblock": true, 00:17:00.897 "num_base_bdevs": 4, 00:17:00.897 "num_base_bdevs_discovered": 4, 00:17:00.897 "num_base_bdevs_operational": 4, 00:17:00.897 "base_bdevs_list": [ 00:17:00.897 { 
00:17:00.897 "name": "spare", 00:17:00.897 "uuid": "203039a0-43ab-565d-80c9-4b5ee09cbad0", 00:17:00.897 "is_configured": true, 00:17:00.897 "data_offset": 2048, 00:17:00.897 "data_size": 63488 00:17:00.897 }, 00:17:00.897 { 00:17:00.897 "name": "BaseBdev2", 00:17:00.897 "uuid": "5445732e-057b-50eb-9921-c0a7844127b9", 00:17:00.897 "is_configured": true, 00:17:00.897 "data_offset": 2048, 00:17:00.897 "data_size": 63488 00:17:00.897 }, 00:17:00.897 { 00:17:00.897 "name": "BaseBdev3", 00:17:00.897 "uuid": "ddbbf1eb-aa74-567f-8eec-f1fc6627594b", 00:17:00.897 "is_configured": true, 00:17:00.897 "data_offset": 2048, 00:17:00.897 "data_size": 63488 00:17:00.897 }, 00:17:00.897 { 00:17:00.897 "name": "BaseBdev4", 00:17:00.897 "uuid": "d4246290-41d7-53b5-8ffb-77eea8d45810", 00:17:00.897 "is_configured": true, 00:17:00.897 "data_offset": 2048, 00:17:00.897 "data_size": 63488 00:17:00.897 } 00:17:00.897 ] 00:17:00.897 }' 00:17:00.897 17:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.897 17:33:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.157 17:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:01.157 17:33:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.157 17:33:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.157 [2024-12-07 17:33:34.436269] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:01.157 [2024-12-07 17:33:34.436305] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:01.157 [2024-12-07 17:33:34.436387] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:01.157 [2024-12-07 17:33:34.436480] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:01.157 [2024-12-07 
17:33:34.436501] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:01.157 17:33:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.157 17:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.157 17:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:17:01.157 17:33:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.157 17:33:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.157 17:33:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.157 17:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:01.157 17:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:01.157 17:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:01.157 17:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:01.157 17:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:01.157 17:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:01.157 17:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:01.157 17:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:01.157 17:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:01.157 17:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:01.157 17:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:01.157 17:33:34 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:01.157 17:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:01.417 /dev/nbd0 00:17:01.417 17:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:01.417 17:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:01.417 17:33:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:01.417 17:33:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:01.417 17:33:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:01.417 17:33:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:01.417 17:33:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:01.417 17:33:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:01.417 17:33:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:01.417 17:33:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:01.417 17:33:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:01.417 1+0 records in 00:17:01.417 1+0 records out 00:17:01.417 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268501 s, 15.3 MB/s 00:17:01.417 17:33:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:01.417 17:33:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:01.417 17:33:34 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:01.417 17:33:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:01.417 17:33:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:01.417 17:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:01.417 17:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:01.417 17:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:01.678 /dev/nbd1 00:17:01.678 17:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:01.678 17:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:01.678 17:33:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:01.678 17:33:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:01.678 17:33:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:01.678 17:33:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:01.678 17:33:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:01.678 17:33:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:01.678 17:33:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:01.678 17:33:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:01.678 17:33:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:01.678 1+0 records in 00:17:01.678 
1+0 records out 00:17:01.678 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000552375 s, 7.4 MB/s 00:17:01.678 17:33:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:01.678 17:33:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:01.678 17:33:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:01.678 17:33:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:01.678 17:33:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:01.679 17:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:01.679 17:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:01.679 17:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:01.943 17:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:01.943 17:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:01.943 17:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:01.943 17:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:01.943 17:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:01.943 17:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:01.943 17:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:02.204 17:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:02.204 
17:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:02.204 17:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:02.204 17:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:02.204 17:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:02.204 17:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:02.204 17:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:02.204 17:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:02.204 17:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:02.204 17:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:02.463 17:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:02.463 17:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:02.463 17:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:02.463 17:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:02.463 17:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:02.463 17:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:02.463 17:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:02.463 17:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:02.463 17:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:02.463 17:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd 
bdev_passthru_delete spare 00:17:02.464 17:33:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.464 17:33:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.464 17:33:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.464 17:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:02.464 17:33:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.464 17:33:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.464 [2024-12-07 17:33:35.624630] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:02.464 [2024-12-07 17:33:35.624692] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:02.464 [2024-12-07 17:33:35.624715] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:17:02.464 [2024-12-07 17:33:35.624725] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.464 [2024-12-07 17:33:35.627167] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.464 [2024-12-07 17:33:35.627209] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:02.464 [2024-12-07 17:33:35.627312] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:02.464 [2024-12-07 17:33:35.627371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:02.464 [2024-12-07 17:33:35.627543] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:02.464 [2024-12-07 17:33:35.627638] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:02.464 [2024-12-07 17:33:35.627763] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:02.464 spare 00:17:02.464 17:33:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.464 17:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:02.464 17:33:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.464 17:33:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.464 [2024-12-07 17:33:35.727674] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:02.464 [2024-12-07 17:33:35.727751] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:02.464 [2024-12-07 17:33:35.728049] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:17:02.464 [2024-12-07 17:33:35.735302] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:02.464 [2024-12-07 17:33:35.735361] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:02.464 [2024-12-07 17:33:35.735616] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:02.464 17:33:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.464 17:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:02.464 17:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:02.464 17:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:02.464 17:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:02.464 17:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:17:02.464 17:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:17:02.464 17:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:02.464 17:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:02.464 17:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:02.464 17:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:02.464 17:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:02.464 17:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:02.464 17:33:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:02.464 17:33:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:02.464 17:33:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:02.464 17:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:02.464 "name": "raid_bdev1",
00:17:02.464 "uuid": "30d658c2-60f0-45f0-b490-d0ed0029fd55",
00:17:02.464 "strip_size_kb": 64,
00:17:02.464 "state": "online",
00:17:02.464 "raid_level": "raid5f",
00:17:02.464 "superblock": true,
00:17:02.464 "num_base_bdevs": 4,
00:17:02.464 "num_base_bdevs_discovered": 4,
00:17:02.464 "num_base_bdevs_operational": 4,
00:17:02.464 "base_bdevs_list": [
00:17:02.464 {
00:17:02.464 "name": "spare",
00:17:02.464 "uuid": "203039a0-43ab-565d-80c9-4b5ee09cbad0",
00:17:02.464 "is_configured": true,
00:17:02.464 "data_offset": 2048,
00:17:02.464 "data_size": 63488
00:17:02.464 },
00:17:02.464 {
00:17:02.464 "name": "BaseBdev2",
00:17:02.464 "uuid": "5445732e-057b-50eb-9921-c0a7844127b9",
00:17:02.464 "is_configured": true,
00:17:02.464 "data_offset":
2048,
00:17:02.464 "data_size": 63488
00:17:02.464 },
00:17:02.464 {
00:17:02.464 "name": "BaseBdev3",
00:17:02.464 "uuid": "ddbbf1eb-aa74-567f-8eec-f1fc6627594b",
00:17:02.464 "is_configured": true,
00:17:02.464 "data_offset": 2048,
00:17:02.464 "data_size": 63488
00:17:02.464 },
00:17:02.464 {
00:17:02.464 "name": "BaseBdev4",
00:17:02.464 "uuid": "d4246290-41d7-53b5-8ffb-77eea8d45810",
00:17:02.464 "is_configured": true,
00:17:02.464 "data_offset": 2048,
00:17:02.464 "data_size": 63488
00:17:02.464 }
00:17:02.464 ]
00:17:02.464 }'
00:17:02.464 17:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:02.464 17:33:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:03.033 17:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none
00:17:03.033 17:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:03.033 17:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:17:03.033 17:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:17:03.033 17:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:03.033 17:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:03.033 17:33:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:03.033 17:33:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:03.033 17:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:03.033 17:33:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:03.033 17:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:03.033 "name":
"raid_bdev1",
00:17:03.033 "uuid": "30d658c2-60f0-45f0-b490-d0ed0029fd55",
00:17:03.033 "strip_size_kb": 64,
00:17:03.033 "state": "online",
00:17:03.033 "raid_level": "raid5f",
00:17:03.033 "superblock": true,
00:17:03.033 "num_base_bdevs": 4,
00:17:03.033 "num_base_bdevs_discovered": 4,
00:17:03.033 "num_base_bdevs_operational": 4,
00:17:03.033 "base_bdevs_list": [
00:17:03.033 {
00:17:03.033 "name": "spare",
00:17:03.033 "uuid": "203039a0-43ab-565d-80c9-4b5ee09cbad0",
00:17:03.033 "is_configured": true,
00:17:03.033 "data_offset": 2048,
00:17:03.033 "data_size": 63488
00:17:03.033 },
00:17:03.033 {
00:17:03.033 "name": "BaseBdev2",
00:17:03.033 "uuid": "5445732e-057b-50eb-9921-c0a7844127b9",
00:17:03.033 "is_configured": true,
00:17:03.033 "data_offset": 2048,
00:17:03.033 "data_size": 63488
00:17:03.033 },
00:17:03.033 {
00:17:03.033 "name": "BaseBdev3",
00:17:03.033 "uuid": "ddbbf1eb-aa74-567f-8eec-f1fc6627594b",
00:17:03.033 "is_configured": true,
00:17:03.033 "data_offset": 2048,
00:17:03.033 "data_size": 63488
00:17:03.033 },
00:17:03.033 {
00:17:03.033 "name": "BaseBdev4",
00:17:03.033 "uuid": "d4246290-41d7-53b5-8ffb-77eea8d45810",
00:17:03.033 "is_configured": true,
00:17:03.033 "data_offset": 2048,
00:17:03.033 "data_size": 63488
00:17:03.033 }
00:17:03.033 ]
00:17:03.033 }'
00:17:03.033 17:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:03.033 17:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:17:03.033 17:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:03.033 17:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:17:03.033 17:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:03.033 17:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name'
00:17:03.033 17:33:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:03.033 17:33:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:03.033 17:33:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:03.033 17:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]]
00:17:03.033 17:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:17:03.033 17:33:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:03.033 17:33:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:03.033 [2024-12-07 17:33:36.386622] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:17:03.033 17:33:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:03.033 17:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:17:03.033 17:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:03.033 17:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:03.033 17:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:17:03.033 17:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:17:03.033 17:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:17:03.033 17:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:03.033 17:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:03.033 17:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:03.033 17:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:03.033 17:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:03.033 17:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:03.033 17:33:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:03.033 17:33:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:03.292 17:33:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:03.292 17:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:03.292 "name": "raid_bdev1",
00:17:03.292 "uuid": "30d658c2-60f0-45f0-b490-d0ed0029fd55",
00:17:03.292 "strip_size_kb": 64,
00:17:03.292 "state": "online",
00:17:03.292 "raid_level": "raid5f",
00:17:03.292 "superblock": true,
00:17:03.292 "num_base_bdevs": 4,
00:17:03.292 "num_base_bdevs_discovered": 3,
00:17:03.292 "num_base_bdevs_operational": 3,
00:17:03.292 "base_bdevs_list": [
00:17:03.292 {
00:17:03.292 "name": null,
00:17:03.292 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:03.292 "is_configured": false,
00:17:03.292 "data_offset": 0,
00:17:03.292 "data_size": 63488
00:17:03.292 },
00:17:03.292 {
00:17:03.292 "name": "BaseBdev2",
00:17:03.292 "uuid": "5445732e-057b-50eb-9921-c0a7844127b9",
00:17:03.292 "is_configured": true,
00:17:03.292 "data_offset": 2048,
00:17:03.292 "data_size": 63488
00:17:03.292 },
00:17:03.292 {
00:17:03.292 "name": "BaseBdev3",
00:17:03.292 "uuid": "ddbbf1eb-aa74-567f-8eec-f1fc6627594b",
00:17:03.292 "is_configured": true,
00:17:03.292 "data_offset": 2048,
00:17:03.292 "data_size": 63488
00:17:03.292 },
00:17:03.292 {
00:17:03.292 "name": "BaseBdev4",
00:17:03.292 "uuid": "d4246290-41d7-53b5-8ffb-77eea8d45810",
00:17:03.292 "is_configured": true,
00:17:03.292 "data_offset":
2048,
00:17:03.292 "data_size": 63488
00:17:03.292 }
00:17:03.292 ]
00:17:03.292 }'
00:17:03.292 17:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:03.292 17:33:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:03.551 17:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:17:03.551 17:33:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:03.551 17:33:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:03.551 [2024-12-07 17:33:36.833878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:17:03.551 [2024-12-07 17:33:36.834176] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:17:03.551 [2024-12-07 17:33:36.834246] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:17:03.551 [2024-12-07 17:33:36.834309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:17:03.551 [2024-12-07 17:33:36.848489] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0
00:17:03.551 17:33:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:03.551 17:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1
00:17:03.551 [2024-12-07 17:33:36.857563] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:17:04.490 17:33:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:17:04.490 17:33:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:04.490 17:33:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:17:04.490 17:33:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:17:04.490 17:33:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:04.490 17:33:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:04.490 17:33:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:04.490 17:33:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:04.490 17:33:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:04.750 17:33:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:04.751 17:33:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:04.751 "name": "raid_bdev1",
00:17:04.751 "uuid": "30d658c2-60f0-45f0-b490-d0ed0029fd55",
00:17:04.751 "strip_size_kb": 64,
00:17:04.751 "state": "online",
"raid_level": "raid5f",
00:17:04.751 "superblock": true,
00:17:04.751 "num_base_bdevs": 4,
00:17:04.751 "num_base_bdevs_discovered": 4,
00:17:04.751 "num_base_bdevs_operational": 4,
00:17:04.751 "process": {
00:17:04.751 "type": "rebuild",
00:17:04.751 "target": "spare",
00:17:04.751 "progress": {
00:17:04.751 "blocks": 19200,
00:17:04.751 "percent": 10
00:17:04.751 }
00:17:04.751 },
00:17:04.751 "base_bdevs_list": [
00:17:04.751 {
00:17:04.751 "name": "spare",
00:17:04.751 "uuid": "203039a0-43ab-565d-80c9-4b5ee09cbad0",
00:17:04.751 "is_configured": true,
00:17:04.751 "data_offset": 2048,
00:17:04.751 "data_size": 63488
00:17:04.751 },
00:17:04.751 {
00:17:04.751 "name": "BaseBdev2",
00:17:04.751 "uuid": "5445732e-057b-50eb-9921-c0a7844127b9",
00:17:04.751 "is_configured": true,
00:17:04.751 "data_offset": 2048,
00:17:04.751 "data_size": 63488
00:17:04.751 },
00:17:04.751 {
00:17:04.751 "name": "BaseBdev3",
00:17:04.751 "uuid": "ddbbf1eb-aa74-567f-8eec-f1fc6627594b",
00:17:04.751 "is_configured": true,
00:17:04.751 "data_offset": 2048,
00:17:04.751 "data_size": 63488
00:17:04.751 },
00:17:04.751 {
00:17:04.751 "name": "BaseBdev4",
00:17:04.751 "uuid": "d4246290-41d7-53b5-8ffb-77eea8d45810",
00:17:04.751 "is_configured": true,
00:17:04.751 "data_offset": 2048,
00:17:04.751 "data_size": 63488
00:17:04.751 }
00:17:04.751 ]
00:17:04.751 }'
00:17:04.751 17:33:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:04.751 17:33:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:17:04.751 17:33:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:04.751 17:33:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:17:04.751 17:33:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare
00:17:04.751 17:33:37 bdev_raid.raid5f_rebuild_test_sb
-- common/autotest_common.sh@563 -- # xtrace_disable
00:17:04.751 17:33:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:04.751 [2024-12-07 17:33:38.000687] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:17:04.751 [2024-12-07 17:33:38.064116] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:17:04.751 [2024-12-07 17:33:38.064245] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:04.751 [2024-12-07 17:33:38.064286] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:17:04.751 [2024-12-07 17:33:38.064310] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:17:04.751 17:33:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:04.751 17:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:17:04.751 17:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:04.751 17:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:04.751 17:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:17:04.751 17:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:17:04.751 17:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:17:04.751 17:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:04.751 17:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:04.751 17:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:04.751 17:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111
-- # local tmp
00:17:04.751 17:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:04.751 17:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:04.751 17:33:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:04.751 17:33:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:04.751 17:33:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:05.011 17:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:05.011 "name": "raid_bdev1",
00:17:05.011 "uuid": "30d658c2-60f0-45f0-b490-d0ed0029fd55",
00:17:05.011 "strip_size_kb": 64,
00:17:05.011 "state": "online",
00:17:05.011 "raid_level": "raid5f",
00:17:05.011 "superblock": true,
00:17:05.011 "num_base_bdevs": 4,
00:17:05.011 "num_base_bdevs_discovered": 3,
00:17:05.011 "num_base_bdevs_operational": 3,
00:17:05.011 "base_bdevs_list": [
00:17:05.011 {
00:17:05.011 "name": null,
00:17:05.011 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:05.011 "is_configured": false,
00:17:05.012 "data_offset": 0,
00:17:05.012 "data_size": 63488
00:17:05.012 },
00:17:05.012 {
00:17:05.012 "name": "BaseBdev2",
00:17:05.012 "uuid": "5445732e-057b-50eb-9921-c0a7844127b9",
00:17:05.012 "is_configured": true,
00:17:05.012 "data_offset": 2048,
00:17:05.012 "data_size": 63488
00:17:05.012 },
00:17:05.012 {
00:17:05.012 "name": "BaseBdev3",
00:17:05.012 "uuid": "ddbbf1eb-aa74-567f-8eec-f1fc6627594b",
00:17:05.012 "is_configured": true,
00:17:05.012 "data_offset": 2048,
00:17:05.012 "data_size": 63488
00:17:05.012 },
00:17:05.012 {
00:17:05.012 "name": "BaseBdev4",
00:17:05.012 "uuid": "d4246290-41d7-53b5-8ffb-77eea8d45810",
00:17:05.012 "is_configured": true,
00:17:05.012 "data_offset": 2048,
00:17:05.012 "data_size": 63488
00:17:05.012 }
00:17:05.012 ]
00:17:05.012
}'
00:17:05.012 17:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:05.012 17:33:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:05.271 17:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:17:05.271 17:33:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:05.271 17:33:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:05.271 [2024-12-07 17:33:38.497088] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:17:05.271 [2024-12-07 17:33:38.497155] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:05.271 [2024-12-07 17:33:38.497183] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380
00:17:05.271 [2024-12-07 17:33:38.497196] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:05.271 [2024-12-07 17:33:38.497686] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:05.271 [2024-12-07 17:33:38.497721] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:17:05.271 [2024-12-07 17:33:38.497813] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:17:05.271 [2024-12-07 17:33:38.497834] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:17:05.271 [2024-12-07 17:33:38.497844] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:17:05.271 [2024-12-07 17:33:38.497877] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:17:05.271 [2024-12-07 17:33:38.512023] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370
00:17:05.271 spare
00:17:05.271 17:33:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:05.271 17:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1
00:17:05.271 [2024-12-07 17:33:38.521174] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:17:06.211 17:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:17:06.211 17:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:06.211 17:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:17:06.211 17:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:17:06.211 17:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:06.211 17:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:06.211 17:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:06.211 17:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:06.211 17:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:06.211 17:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:06.211 17:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:06.211 "name": "raid_bdev1",
00:17:06.211 "uuid": "30d658c2-60f0-45f0-b490-d0ed0029fd55",
00:17:06.211 "strip_size_kb": 64,
00:17:06.211 "state":
"online",
00:17:06.211 "raid_level": "raid5f",
00:17:06.211 "superblock": true,
00:17:06.211 "num_base_bdevs": 4,
00:17:06.211 "num_base_bdevs_discovered": 4,
00:17:06.211 "num_base_bdevs_operational": 4,
00:17:06.211 "process": {
00:17:06.211 "type": "rebuild",
00:17:06.211 "target": "spare",
00:17:06.211 "progress": {
00:17:06.211 "blocks": 19200,
00:17:06.211 "percent": 10
00:17:06.211 }
00:17:06.211 },
00:17:06.211 "base_bdevs_list": [
00:17:06.211 {
00:17:06.211 "name": "spare",
00:17:06.211 "uuid": "203039a0-43ab-565d-80c9-4b5ee09cbad0",
00:17:06.211 "is_configured": true,
00:17:06.211 "data_offset": 2048,
00:17:06.211 "data_size": 63488
00:17:06.211 },
00:17:06.211 {
00:17:06.211 "name": "BaseBdev2",
00:17:06.211 "uuid": "5445732e-057b-50eb-9921-c0a7844127b9",
00:17:06.211 "is_configured": true,
00:17:06.211 "data_offset": 2048,
00:17:06.211 "data_size": 63488
00:17:06.211 },
00:17:06.211 {
00:17:06.211 "name": "BaseBdev3",
00:17:06.211 "uuid": "ddbbf1eb-aa74-567f-8eec-f1fc6627594b",
00:17:06.211 "is_configured": true,
00:17:06.211 "data_offset": 2048,
00:17:06.211 "data_size": 63488
00:17:06.211 },
00:17:06.211 {
00:17:06.211 "name": "BaseBdev4",
00:17:06.211 "uuid": "d4246290-41d7-53b5-8ffb-77eea8d45810",
00:17:06.211 "is_configured": true,
00:17:06.211 "data_offset": 2048,
00:17:06.211 "data_size": 63488
00:17:06.211 }
00:17:06.211 ]
00:17:06.211 }'
00:17:06.211 17:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:06.470 17:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:17:06.470 17:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:06.470 17:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:17:06.470 17:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare
00:17:06.470 17:33:39
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:06.470 17:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:06.470 [2024-12-07 17:33:39.655912] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:17:06.470 [2024-12-07 17:33:39.727602] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:17:06.470 [2024-12-07 17:33:39.727740] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:06.470 [2024-12-07 17:33:39.727764] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:17:06.470 [2024-12-07 17:33:39.727773] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:17:06.470 17:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:06.470 17:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:17:06.470 17:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:06.470 17:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:06.470 17:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:17:06.470 17:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:17:06.470 17:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:17:06.470 17:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:06.470 17:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:06.470 17:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:06.470 17:33:39
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:06.470 17:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:06.470 17:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:06.470 17:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:06.470 17:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:06.470 17:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:06.470 17:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:06.470 "name": "raid_bdev1",
00:17:06.470 "uuid": "30d658c2-60f0-45f0-b490-d0ed0029fd55",
00:17:06.470 "strip_size_kb": 64,
00:17:06.470 "state": "online",
00:17:06.470 "raid_level": "raid5f",
00:17:06.470 "superblock": true,
00:17:06.470 "num_base_bdevs": 4,
00:17:06.470 "num_base_bdevs_discovered": 3,
00:17:06.470 "num_base_bdevs_operational": 3,
00:17:06.470 "base_bdevs_list": [
00:17:06.470 {
00:17:06.470 "name": null,
00:17:06.470 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:06.470 "is_configured": false,
00:17:06.470 "data_offset": 0,
00:17:06.470 "data_size": 63488
00:17:06.470 },
00:17:06.470 {
00:17:06.470 "name": "BaseBdev2",
00:17:06.470 "uuid": "5445732e-057b-50eb-9921-c0a7844127b9",
00:17:06.470 "is_configured": true,
00:17:06.470 "data_offset": 2048,
00:17:06.470 "data_size": 63488
00:17:06.470 },
00:17:06.470 {
00:17:06.470 "name": "BaseBdev3",
00:17:06.470 "uuid": "ddbbf1eb-aa74-567f-8eec-f1fc6627594b",
00:17:06.470 "is_configured": true,
00:17:06.470 "data_offset": 2048,
00:17:06.470 "data_size": 63488
00:17:06.470 },
00:17:06.470 {
00:17:06.470 "name": "BaseBdev4",
00:17:06.470 "uuid": "d4246290-41d7-53b5-8ffb-77eea8d45810",
00:17:06.470 "is_configured": true,
00:17:06.470 "data_offset": 2048,
00:17:06.470
"data_size": 63488
00:17:06.470 }
00:17:06.470 ]
00:17:06.470 }'
00:17:06.470 17:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:06.470 17:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:07.040 17:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none
00:17:07.040 17:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:07.040 17:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:17:07.040 17:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:17:07.040 17:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:07.040 17:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:07.040 17:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:07.040 17:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:07.040 17:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:07.040 17:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:07.040 17:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:07.040 "name": "raid_bdev1",
00:17:07.040 "uuid": "30d658c2-60f0-45f0-b490-d0ed0029fd55",
00:17:07.040 "strip_size_kb": 64,
00:17:07.040 "state": "online",
00:17:07.040 "raid_level": "raid5f",
00:17:07.040 "superblock": true,
00:17:07.040 "num_base_bdevs": 4,
00:17:07.040 "num_base_bdevs_discovered": 3,
00:17:07.040 "num_base_bdevs_operational": 3,
00:17:07.040 "base_bdevs_list": [
00:17:07.040 {
00:17:07.040 "name": null,
00:17:07.040 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:07.040
"is_configured": false,
00:17:07.040 "data_offset": 0,
00:17:07.040 "data_size": 63488
00:17:07.040 },
00:17:07.040 {
00:17:07.040 "name": "BaseBdev2",
00:17:07.040 "uuid": "5445732e-057b-50eb-9921-c0a7844127b9",
00:17:07.040 "is_configured": true,
00:17:07.040 "data_offset": 2048,
00:17:07.040 "data_size": 63488
00:17:07.040 },
00:17:07.040 {
00:17:07.040 "name": "BaseBdev3",
00:17:07.040 "uuid": "ddbbf1eb-aa74-567f-8eec-f1fc6627594b",
00:17:07.040 "is_configured": true,
00:17:07.040 "data_offset": 2048,
00:17:07.040 "data_size": 63488
00:17:07.040 },
00:17:07.040 {
00:17:07.040 "name": "BaseBdev4",
00:17:07.040 "uuid": "d4246290-41d7-53b5-8ffb-77eea8d45810",
00:17:07.040 "is_configured": true,
00:17:07.040 "data_offset": 2048,
00:17:07.040 "data_size": 63488
00:17:07.040 }
00:17:07.040 ]
00:17:07.040 }'
00:17:07.040 17:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:07.040 17:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:17:07.040 17:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:07.040 17:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:17:07.040 17:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1
00:17:07.040 17:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:07.040 17:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:07.040 17:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:07.040 17:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:17:07.040 17:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:07.040 17:33:40
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.040 [2024-12-07 17:33:40.253488] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:07.040 [2024-12-07 17:33:40.253547] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:07.041 [2024-12-07 17:33:40.253568] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:17:07.041 [2024-12-07 17:33:40.253577] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.041 [2024-12-07 17:33:40.254048] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:07.041 [2024-12-07 17:33:40.254068] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:07.041 [2024-12-07 17:33:40.254148] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:07.041 [2024-12-07 17:33:40.254162] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:07.041 [2024-12-07 17:33:40.254174] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:07.041 [2024-12-07 17:33:40.254185] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:07.041 BaseBdev1 00:17:07.041 17:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.041 17:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:07.978 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:07.978 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:07.978 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:17:07.978 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:07.978 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:07.978 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:07.978 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.978 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.978 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:07.978 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.978 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.978 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.978 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.978 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.978 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.978 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.978 "name": "raid_bdev1", 00:17:07.978 "uuid": "30d658c2-60f0-45f0-b490-d0ed0029fd55", 00:17:07.978 "strip_size_kb": 64, 00:17:07.978 "state": "online", 00:17:07.978 "raid_level": "raid5f", 00:17:07.978 "superblock": true, 00:17:07.978 "num_base_bdevs": 4, 00:17:07.978 "num_base_bdevs_discovered": 3, 00:17:07.978 "num_base_bdevs_operational": 3, 00:17:07.978 "base_bdevs_list": [ 00:17:07.978 { 00:17:07.978 "name": null, 00:17:07.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.978 "is_configured": false, 00:17:07.978 
"data_offset": 0, 00:17:07.978 "data_size": 63488 00:17:07.978 }, 00:17:07.978 { 00:17:07.978 "name": "BaseBdev2", 00:17:07.978 "uuid": "5445732e-057b-50eb-9921-c0a7844127b9", 00:17:07.978 "is_configured": true, 00:17:07.978 "data_offset": 2048, 00:17:07.978 "data_size": 63488 00:17:07.978 }, 00:17:07.978 { 00:17:07.978 "name": "BaseBdev3", 00:17:07.978 "uuid": "ddbbf1eb-aa74-567f-8eec-f1fc6627594b", 00:17:07.978 "is_configured": true, 00:17:07.978 "data_offset": 2048, 00:17:07.978 "data_size": 63488 00:17:07.978 }, 00:17:07.978 { 00:17:07.978 "name": "BaseBdev4", 00:17:07.978 "uuid": "d4246290-41d7-53b5-8ffb-77eea8d45810", 00:17:07.978 "is_configured": true, 00:17:07.978 "data_offset": 2048, 00:17:07.978 "data_size": 63488 00:17:07.978 } 00:17:07.978 ] 00:17:07.978 }' 00:17:07.978 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.978 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.545 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:08.545 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:08.545 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:08.545 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:08.545 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:08.545 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.545 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.545 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.545 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:08.545 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.545 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:08.546 "name": "raid_bdev1", 00:17:08.546 "uuid": "30d658c2-60f0-45f0-b490-d0ed0029fd55", 00:17:08.546 "strip_size_kb": 64, 00:17:08.546 "state": "online", 00:17:08.546 "raid_level": "raid5f", 00:17:08.546 "superblock": true, 00:17:08.546 "num_base_bdevs": 4, 00:17:08.546 "num_base_bdevs_discovered": 3, 00:17:08.546 "num_base_bdevs_operational": 3, 00:17:08.546 "base_bdevs_list": [ 00:17:08.546 { 00:17:08.546 "name": null, 00:17:08.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.546 "is_configured": false, 00:17:08.546 "data_offset": 0, 00:17:08.546 "data_size": 63488 00:17:08.546 }, 00:17:08.546 { 00:17:08.546 "name": "BaseBdev2", 00:17:08.546 "uuid": "5445732e-057b-50eb-9921-c0a7844127b9", 00:17:08.546 "is_configured": true, 00:17:08.546 "data_offset": 2048, 00:17:08.546 "data_size": 63488 00:17:08.546 }, 00:17:08.546 { 00:17:08.546 "name": "BaseBdev3", 00:17:08.546 "uuid": "ddbbf1eb-aa74-567f-8eec-f1fc6627594b", 00:17:08.546 "is_configured": true, 00:17:08.546 "data_offset": 2048, 00:17:08.546 "data_size": 63488 00:17:08.546 }, 00:17:08.546 { 00:17:08.546 "name": "BaseBdev4", 00:17:08.546 "uuid": "d4246290-41d7-53b5-8ffb-77eea8d45810", 00:17:08.546 "is_configured": true, 00:17:08.546 "data_offset": 2048, 00:17:08.546 "data_size": 63488 00:17:08.546 } 00:17:08.546 ] 00:17:08.546 }' 00:17:08.546 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:08.546 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:08.546 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:08.546 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:08.546 
17:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:08.546 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:17:08.546 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:08.546 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:08.546 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:08.546 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:08.546 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:08.546 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:08.546 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.546 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.546 [2024-12-07 17:33:41.874896] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:08.546 [2024-12-07 17:33:41.875158] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:08.546 [2024-12-07 17:33:41.875218] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:08.546 request: 00:17:08.546 { 00:17:08.546 "base_bdev": "BaseBdev1", 00:17:08.546 "raid_bdev": "raid_bdev1", 00:17:08.546 "method": "bdev_raid_add_base_bdev", 00:17:08.546 "req_id": 1 00:17:08.546 } 00:17:08.546 Got JSON-RPC error response 00:17:08.546 response: 00:17:08.546 { 00:17:08.546 "code": -22, 00:17:08.546 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:17:08.546 } 00:17:08.546 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:08.546 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:17:08.546 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:08.546 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:08.546 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:08.546 17:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:09.926 17:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:09.926 17:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:09.926 17:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:09.926 17:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:09.926 17:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:09.926 17:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:09.926 17:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:09.926 17:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:09.926 17:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:09.926 17:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:09.926 17:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.926 17:33:42 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.926 17:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.926 17:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.926 17:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.926 17:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:09.926 "name": "raid_bdev1", 00:17:09.926 "uuid": "30d658c2-60f0-45f0-b490-d0ed0029fd55", 00:17:09.926 "strip_size_kb": 64, 00:17:09.926 "state": "online", 00:17:09.926 "raid_level": "raid5f", 00:17:09.926 "superblock": true, 00:17:09.926 "num_base_bdevs": 4, 00:17:09.926 "num_base_bdevs_discovered": 3, 00:17:09.926 "num_base_bdevs_operational": 3, 00:17:09.926 "base_bdevs_list": [ 00:17:09.926 { 00:17:09.926 "name": null, 00:17:09.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.926 "is_configured": false, 00:17:09.926 "data_offset": 0, 00:17:09.926 "data_size": 63488 00:17:09.926 }, 00:17:09.926 { 00:17:09.926 "name": "BaseBdev2", 00:17:09.926 "uuid": "5445732e-057b-50eb-9921-c0a7844127b9", 00:17:09.926 "is_configured": true, 00:17:09.926 "data_offset": 2048, 00:17:09.926 "data_size": 63488 00:17:09.926 }, 00:17:09.926 { 00:17:09.926 "name": "BaseBdev3", 00:17:09.926 "uuid": "ddbbf1eb-aa74-567f-8eec-f1fc6627594b", 00:17:09.926 "is_configured": true, 00:17:09.926 "data_offset": 2048, 00:17:09.926 "data_size": 63488 00:17:09.926 }, 00:17:09.926 { 00:17:09.926 "name": "BaseBdev4", 00:17:09.926 "uuid": "d4246290-41d7-53b5-8ffb-77eea8d45810", 00:17:09.926 "is_configured": true, 00:17:09.926 "data_offset": 2048, 00:17:09.926 "data_size": 63488 00:17:09.926 } 00:17:09.926 ] 00:17:09.926 }' 00:17:09.926 17:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:09.926 17:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:17:10.186 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:10.186 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:10.186 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:10.186 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:10.186 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:10.186 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.186 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.186 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.186 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.186 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.186 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:10.186 "name": "raid_bdev1", 00:17:10.186 "uuid": "30d658c2-60f0-45f0-b490-d0ed0029fd55", 00:17:10.186 "strip_size_kb": 64, 00:17:10.186 "state": "online", 00:17:10.186 "raid_level": "raid5f", 00:17:10.186 "superblock": true, 00:17:10.186 "num_base_bdevs": 4, 00:17:10.186 "num_base_bdevs_discovered": 3, 00:17:10.186 "num_base_bdevs_operational": 3, 00:17:10.186 "base_bdevs_list": [ 00:17:10.186 { 00:17:10.186 "name": null, 00:17:10.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.186 "is_configured": false, 00:17:10.186 "data_offset": 0, 00:17:10.186 "data_size": 63488 00:17:10.186 }, 00:17:10.186 { 00:17:10.186 "name": "BaseBdev2", 00:17:10.186 "uuid": "5445732e-057b-50eb-9921-c0a7844127b9", 00:17:10.186 "is_configured": true, 
00:17:10.186 "data_offset": 2048, 00:17:10.186 "data_size": 63488 00:17:10.186 }, 00:17:10.186 { 00:17:10.186 "name": "BaseBdev3", 00:17:10.186 "uuid": "ddbbf1eb-aa74-567f-8eec-f1fc6627594b", 00:17:10.186 "is_configured": true, 00:17:10.186 "data_offset": 2048, 00:17:10.186 "data_size": 63488 00:17:10.186 }, 00:17:10.186 { 00:17:10.186 "name": "BaseBdev4", 00:17:10.186 "uuid": "d4246290-41d7-53b5-8ffb-77eea8d45810", 00:17:10.186 "is_configured": true, 00:17:10.186 "data_offset": 2048, 00:17:10.186 "data_size": 63488 00:17:10.186 } 00:17:10.186 ] 00:17:10.186 }' 00:17:10.186 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:10.186 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:10.186 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:10.186 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:10.186 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85104 00:17:10.186 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 85104 ']' 00:17:10.186 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 85104 00:17:10.186 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:17:10.186 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:10.186 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85104 00:17:10.186 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:10.186 killing process with pid 85104 00:17:10.186 Received shutdown signal, test time was about 60.000000 seconds 00:17:10.186 00:17:10.186 Latency(us) 00:17:10.186 [2024-12-07T17:33:43.569Z] Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:10.187 [2024-12-07T17:33:43.569Z] =================================================================================================================== 00:17:10.187 [2024-12-07T17:33:43.569Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:10.187 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:10.187 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85104' 00:17:10.187 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 85104 00:17:10.187 [2024-12-07 17:33:43.523946] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:10.187 [2024-12-07 17:33:43.524084] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:10.187 17:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 85104 00:17:10.187 [2024-12-07 17:33:43.524164] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:10.187 [2024-12-07 17:33:43.524179] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:10.806 [2024-12-07 17:33:44.000195] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:11.746 17:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:17:11.746 00:17:11.746 real 0m26.817s 00:17:11.746 user 0m33.599s 00:17:11.746 sys 0m3.019s 00:17:11.746 17:33:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:11.746 ************************************ 00:17:11.746 END TEST raid5f_rebuild_test_sb 00:17:11.746 ************************************ 00:17:11.746 17:33:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.746 17:33:45 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:17:11.746 17:33:45 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:17:11.746 17:33:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:11.746 17:33:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:11.746 17:33:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:12.004 ************************************ 00:17:12.004 START TEST raid_state_function_test_sb_4k 00:17:12.004 ************************************ 00:17:12.004 17:33:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:17:12.004 17:33:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:12.004 17:33:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:12.004 17:33:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:12.004 17:33:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:12.004 17:33:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:12.004 17:33:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:12.004 17:33:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:12.004 17:33:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:12.004 17:33:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:12.004 17:33:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:12.004 17:33:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:12.004 17:33:45 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:12.004 17:33:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:12.004 17:33:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:12.004 17:33:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:12.004 17:33:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:12.004 17:33:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:12.004 17:33:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:12.004 17:33:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:12.004 17:33:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:12.004 17:33:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:12.005 17:33:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:12.005 17:33:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=85909 00:17:12.005 17:33:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:12.005 17:33:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 85909' 00:17:12.005 Process raid pid: 85909 00:17:12.005 17:33:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 85909 00:17:12.005 17:33:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 85909 ']' 00:17:12.005 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:17:12.005 17:33:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:12.005 17:33:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:12.005 17:33:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:12.005 17:33:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:12.005 17:33:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.005 [2024-12-07 17:33:45.232862] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:17:12.005 [2024-12-07 17:33:45.232996] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:12.263 [2024-12-07 17:33:45.405511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:12.263 [2024-12-07 17:33:45.513574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:12.522 [2024-12-07 17:33:45.703084] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:12.522 [2024-12-07 17:33:45.703206] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:12.782 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:12.782 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:17:12.782 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:12.782 17:33:46 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.782 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.782 [2024-12-07 17:33:46.068800] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:12.782 [2024-12-07 17:33:46.068864] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:12.782 [2024-12-07 17:33:46.068875] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:12.782 [2024-12-07 17:33:46.068884] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:12.782 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.782 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:12.782 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:12.782 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:12.782 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:12.782 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:12.782 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:12.782 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:12.782 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:12.782 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:12.782 17:33:46 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:12.782 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.782 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:12.782 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.782 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.782 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.782 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:12.782 "name": "Existed_Raid", 00:17:12.782 "uuid": "ea507cb5-cde9-4800-b930-cded27273e2f", 00:17:12.782 "strip_size_kb": 0, 00:17:12.782 "state": "configuring", 00:17:12.782 "raid_level": "raid1", 00:17:12.782 "superblock": true, 00:17:12.782 "num_base_bdevs": 2, 00:17:12.782 "num_base_bdevs_discovered": 0, 00:17:12.782 "num_base_bdevs_operational": 2, 00:17:12.782 "base_bdevs_list": [ 00:17:12.782 { 00:17:12.782 "name": "BaseBdev1", 00:17:12.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.782 "is_configured": false, 00:17:12.782 "data_offset": 0, 00:17:12.782 "data_size": 0 00:17:12.782 }, 00:17:12.782 { 00:17:12.782 "name": "BaseBdev2", 00:17:12.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.782 "is_configured": false, 00:17:12.782 "data_offset": 0, 00:17:12.782 "data_size": 0 00:17:12.782 } 00:17:12.782 ] 00:17:12.782 }' 00:17:12.782 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:12.782 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.350 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:17:13.350 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.350 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.350 [2024-12-07 17:33:46.504040] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:13.350 [2024-12-07 17:33:46.504137] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:13.350 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.350 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:13.350 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.350 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.350 [2024-12-07 17:33:46.512018] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:13.350 [2024-12-07 17:33:46.512109] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:13.351 [2024-12-07 17:33:46.512141] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:13.351 [2024-12-07 17:33:46.512170] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:13.351 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.351 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:17:13.351 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.351 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:17:13.351 [2024-12-07 17:33:46.557276] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:13.351 BaseBdev1 00:17:13.351 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.351 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:13.351 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:13.351 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:13.351 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:17:13.351 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:13.351 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:13.351 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:13.351 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.351 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.351 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.351 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:13.351 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.351 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.351 [ 00:17:13.351 { 00:17:13.351 "name": "BaseBdev1", 00:17:13.351 "aliases": [ 00:17:13.351 "529e6a95-f75a-424e-9ba8-96f51f7646ae" 00:17:13.351 
], 00:17:13.351 "product_name": "Malloc disk", 00:17:13.351 "block_size": 4096, 00:17:13.351 "num_blocks": 8192, 00:17:13.351 "uuid": "529e6a95-f75a-424e-9ba8-96f51f7646ae", 00:17:13.351 "assigned_rate_limits": { 00:17:13.351 "rw_ios_per_sec": 0, 00:17:13.351 "rw_mbytes_per_sec": 0, 00:17:13.351 "r_mbytes_per_sec": 0, 00:17:13.351 "w_mbytes_per_sec": 0 00:17:13.351 }, 00:17:13.351 "claimed": true, 00:17:13.351 "claim_type": "exclusive_write", 00:17:13.351 "zoned": false, 00:17:13.351 "supported_io_types": { 00:17:13.351 "read": true, 00:17:13.351 "write": true, 00:17:13.351 "unmap": true, 00:17:13.351 "flush": true, 00:17:13.351 "reset": true, 00:17:13.351 "nvme_admin": false, 00:17:13.351 "nvme_io": false, 00:17:13.351 "nvme_io_md": false, 00:17:13.351 "write_zeroes": true, 00:17:13.351 "zcopy": true, 00:17:13.351 "get_zone_info": false, 00:17:13.351 "zone_management": false, 00:17:13.351 "zone_append": false, 00:17:13.351 "compare": false, 00:17:13.351 "compare_and_write": false, 00:17:13.351 "abort": true, 00:17:13.351 "seek_hole": false, 00:17:13.351 "seek_data": false, 00:17:13.351 "copy": true, 00:17:13.351 "nvme_iov_md": false 00:17:13.351 }, 00:17:13.351 "memory_domains": [ 00:17:13.351 { 00:17:13.351 "dma_device_id": "system", 00:17:13.351 "dma_device_type": 1 00:17:13.351 }, 00:17:13.351 { 00:17:13.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:13.351 "dma_device_type": 2 00:17:13.351 } 00:17:13.351 ], 00:17:13.351 "driver_specific": {} 00:17:13.351 } 00:17:13.351 ] 00:17:13.351 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.351 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:17:13.351 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:13.351 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:17:13.351 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:13.351 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:13.351 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:13.351 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:13.351 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.351 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.351 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.351 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.351 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.351 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:13.351 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.351 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.351 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.351 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.351 "name": "Existed_Raid", 00:17:13.351 "uuid": "5219ab15-1bee-4fb1-a4a5-e3c549cb7e0b", 00:17:13.351 "strip_size_kb": 0, 00:17:13.351 "state": "configuring", 00:17:13.351 "raid_level": "raid1", 00:17:13.351 "superblock": true, 00:17:13.351 "num_base_bdevs": 2, 00:17:13.351 "num_base_bdevs_discovered": 1, 
00:17:13.351 "num_base_bdevs_operational": 2, 00:17:13.351 "base_bdevs_list": [ 00:17:13.351 { 00:17:13.351 "name": "BaseBdev1", 00:17:13.351 "uuid": "529e6a95-f75a-424e-9ba8-96f51f7646ae", 00:17:13.351 "is_configured": true, 00:17:13.351 "data_offset": 256, 00:17:13.351 "data_size": 7936 00:17:13.351 }, 00:17:13.351 { 00:17:13.351 "name": "BaseBdev2", 00:17:13.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.351 "is_configured": false, 00:17:13.351 "data_offset": 0, 00:17:13.351 "data_size": 0 00:17:13.351 } 00:17:13.351 ] 00:17:13.351 }' 00:17:13.351 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.351 17:33:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.920 17:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:13.920 17:33:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.920 17:33:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.920 [2024-12-07 17:33:47.052487] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:13.920 [2024-12-07 17:33:47.052631] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:13.920 17:33:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.920 17:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:13.920 17:33:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.920 17:33:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.920 [2024-12-07 17:33:47.064505] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:13.920 [2024-12-07 17:33:47.066319] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:13.920 [2024-12-07 17:33:47.066395] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:13.920 17:33:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.920 17:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:13.920 17:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:13.920 17:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:13.920 17:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:13.920 17:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:13.920 17:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:13.920 17:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:13.920 17:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:13.920 17:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.920 17:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.920 17:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.920 17:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.920 17:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:13.920 17:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:13.920 17:33:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.920 17:33:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.920 17:33:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.920 17:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.920 "name": "Existed_Raid", 00:17:13.920 "uuid": "a1509a3a-3f23-4a27-b89d-2af131daab7b", 00:17:13.920 "strip_size_kb": 0, 00:17:13.920 "state": "configuring", 00:17:13.920 "raid_level": "raid1", 00:17:13.920 "superblock": true, 00:17:13.920 "num_base_bdevs": 2, 00:17:13.920 "num_base_bdevs_discovered": 1, 00:17:13.920 "num_base_bdevs_operational": 2, 00:17:13.920 "base_bdevs_list": [ 00:17:13.920 { 00:17:13.920 "name": "BaseBdev1", 00:17:13.920 "uuid": "529e6a95-f75a-424e-9ba8-96f51f7646ae", 00:17:13.920 "is_configured": true, 00:17:13.920 "data_offset": 256, 00:17:13.920 "data_size": 7936 00:17:13.920 }, 00:17:13.920 { 00:17:13.920 "name": "BaseBdev2", 00:17:13.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.920 "is_configured": false, 00:17:13.920 "data_offset": 0, 00:17:13.920 "data_size": 0 00:17:13.920 } 00:17:13.920 ] 00:17:13.920 }' 00:17:13.920 17:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.920 17:33:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.180 17:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:17:14.180 17:33:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.180 17:33:47 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.439 [2024-12-07 17:33:47.570226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:14.439 [2024-12-07 17:33:47.570513] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:14.439 [2024-12-07 17:33:47.570528] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:14.439 [2024-12-07 17:33:47.570779] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:14.439 [2024-12-07 17:33:47.570966] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:14.439 BaseBdev2 00:17:14.439 [2024-12-07 17:33:47.570983] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:14.439 [2024-12-07 17:33:47.571119] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:14.439 17:33:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.439 17:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:14.439 17:33:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:14.439 17:33:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:14.439 17:33:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:17:14.439 17:33:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:14.439 17:33:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:14.439 17:33:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:14.439 17:33:47 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.439 17:33:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.439 17:33:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.439 17:33:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:14.439 17:33:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.439 17:33:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.439 [ 00:17:14.439 { 00:17:14.440 "name": "BaseBdev2", 00:17:14.440 "aliases": [ 00:17:14.440 "ef73989d-d642-4732-8938-5ad5d7248f40" 00:17:14.440 ], 00:17:14.440 "product_name": "Malloc disk", 00:17:14.440 "block_size": 4096, 00:17:14.440 "num_blocks": 8192, 00:17:14.440 "uuid": "ef73989d-d642-4732-8938-5ad5d7248f40", 00:17:14.440 "assigned_rate_limits": { 00:17:14.440 "rw_ios_per_sec": 0, 00:17:14.440 "rw_mbytes_per_sec": 0, 00:17:14.440 "r_mbytes_per_sec": 0, 00:17:14.440 "w_mbytes_per_sec": 0 00:17:14.440 }, 00:17:14.440 "claimed": true, 00:17:14.440 "claim_type": "exclusive_write", 00:17:14.440 "zoned": false, 00:17:14.440 "supported_io_types": { 00:17:14.440 "read": true, 00:17:14.440 "write": true, 00:17:14.440 "unmap": true, 00:17:14.440 "flush": true, 00:17:14.440 "reset": true, 00:17:14.440 "nvme_admin": false, 00:17:14.440 "nvme_io": false, 00:17:14.440 "nvme_io_md": false, 00:17:14.440 "write_zeroes": true, 00:17:14.440 "zcopy": true, 00:17:14.440 "get_zone_info": false, 00:17:14.440 "zone_management": false, 00:17:14.440 "zone_append": false, 00:17:14.440 "compare": false, 00:17:14.440 "compare_and_write": false, 00:17:14.440 "abort": true, 00:17:14.440 "seek_hole": false, 00:17:14.440 "seek_data": false, 00:17:14.440 "copy": true, 00:17:14.440 "nvme_iov_md": false 
00:17:14.440 }, 00:17:14.440 "memory_domains": [ 00:17:14.440 { 00:17:14.440 "dma_device_id": "system", 00:17:14.440 "dma_device_type": 1 00:17:14.440 }, 00:17:14.440 { 00:17:14.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:14.440 "dma_device_type": 2 00:17:14.440 } 00:17:14.440 ], 00:17:14.440 "driver_specific": {} 00:17:14.440 } 00:17:14.440 ] 00:17:14.440 17:33:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.440 17:33:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:17:14.440 17:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:14.440 17:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:14.440 17:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:14.440 17:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:14.440 17:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:14.440 17:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:14.440 17:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:14.440 17:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:14.440 17:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:14.440 17:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:14.440 17:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:14.440 17:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:17:14.440 17:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.440 17:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:14.440 17:33:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.440 17:33:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.440 17:33:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.440 17:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:14.440 "name": "Existed_Raid", 00:17:14.440 "uuid": "a1509a3a-3f23-4a27-b89d-2af131daab7b", 00:17:14.440 "strip_size_kb": 0, 00:17:14.440 "state": "online", 00:17:14.440 "raid_level": "raid1", 00:17:14.440 "superblock": true, 00:17:14.440 "num_base_bdevs": 2, 00:17:14.440 "num_base_bdevs_discovered": 2, 00:17:14.440 "num_base_bdevs_operational": 2, 00:17:14.440 "base_bdevs_list": [ 00:17:14.440 { 00:17:14.440 "name": "BaseBdev1", 00:17:14.440 "uuid": "529e6a95-f75a-424e-9ba8-96f51f7646ae", 00:17:14.440 "is_configured": true, 00:17:14.440 "data_offset": 256, 00:17:14.440 "data_size": 7936 00:17:14.440 }, 00:17:14.440 { 00:17:14.440 "name": "BaseBdev2", 00:17:14.440 "uuid": "ef73989d-d642-4732-8938-5ad5d7248f40", 00:17:14.440 "is_configured": true, 00:17:14.440 "data_offset": 256, 00:17:14.440 "data_size": 7936 00:17:14.440 } 00:17:14.440 ] 00:17:14.440 }' 00:17:14.440 17:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:14.440 17:33:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.699 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:14.699 17:33:48 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:14.699 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:14.699 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:14.699 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:14.699 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:14.699 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:14.699 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:14.699 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.699 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.699 [2024-12-07 17:33:48.041743] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:14.699 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.699 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:14.699 "name": "Existed_Raid", 00:17:14.699 "aliases": [ 00:17:14.699 "a1509a3a-3f23-4a27-b89d-2af131daab7b" 00:17:14.699 ], 00:17:14.699 "product_name": "Raid Volume", 00:17:14.699 "block_size": 4096, 00:17:14.699 "num_blocks": 7936, 00:17:14.700 "uuid": "a1509a3a-3f23-4a27-b89d-2af131daab7b", 00:17:14.700 "assigned_rate_limits": { 00:17:14.700 "rw_ios_per_sec": 0, 00:17:14.700 "rw_mbytes_per_sec": 0, 00:17:14.700 "r_mbytes_per_sec": 0, 00:17:14.700 "w_mbytes_per_sec": 0 00:17:14.700 }, 00:17:14.700 "claimed": false, 00:17:14.700 "zoned": false, 00:17:14.700 "supported_io_types": { 00:17:14.700 "read": true, 
00:17:14.700 "write": true, 00:17:14.700 "unmap": false, 00:17:14.700 "flush": false, 00:17:14.700 "reset": true, 00:17:14.700 "nvme_admin": false, 00:17:14.700 "nvme_io": false, 00:17:14.700 "nvme_io_md": false, 00:17:14.700 "write_zeroes": true, 00:17:14.700 "zcopy": false, 00:17:14.700 "get_zone_info": false, 00:17:14.700 "zone_management": false, 00:17:14.700 "zone_append": false, 00:17:14.700 "compare": false, 00:17:14.700 "compare_and_write": false, 00:17:14.700 "abort": false, 00:17:14.700 "seek_hole": false, 00:17:14.700 "seek_data": false, 00:17:14.700 "copy": false, 00:17:14.700 "nvme_iov_md": false 00:17:14.700 }, 00:17:14.700 "memory_domains": [ 00:17:14.700 { 00:17:14.700 "dma_device_id": "system", 00:17:14.700 "dma_device_type": 1 00:17:14.700 }, 00:17:14.700 { 00:17:14.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:14.700 "dma_device_type": 2 00:17:14.700 }, 00:17:14.700 { 00:17:14.700 "dma_device_id": "system", 00:17:14.700 "dma_device_type": 1 00:17:14.700 }, 00:17:14.700 { 00:17:14.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:14.700 "dma_device_type": 2 00:17:14.700 } 00:17:14.700 ], 00:17:14.700 "driver_specific": { 00:17:14.700 "raid": { 00:17:14.700 "uuid": "a1509a3a-3f23-4a27-b89d-2af131daab7b", 00:17:14.700 "strip_size_kb": 0, 00:17:14.700 "state": "online", 00:17:14.700 "raid_level": "raid1", 00:17:14.700 "superblock": true, 00:17:14.700 "num_base_bdevs": 2, 00:17:14.700 "num_base_bdevs_discovered": 2, 00:17:14.700 "num_base_bdevs_operational": 2, 00:17:14.700 "base_bdevs_list": [ 00:17:14.700 { 00:17:14.700 "name": "BaseBdev1", 00:17:14.700 "uuid": "529e6a95-f75a-424e-9ba8-96f51f7646ae", 00:17:14.700 "is_configured": true, 00:17:14.700 "data_offset": 256, 00:17:14.700 "data_size": 7936 00:17:14.700 }, 00:17:14.700 { 00:17:14.700 "name": "BaseBdev2", 00:17:14.700 "uuid": "ef73989d-d642-4732-8938-5ad5d7248f40", 00:17:14.700 "is_configured": true, 00:17:14.700 "data_offset": 256, 00:17:14.700 "data_size": 7936 00:17:14.700 } 
00:17:14.700 ] 00:17:14.700 } 00:17:14.700 } 00:17:14.700 }' 00:17:14.700 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:14.959 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:14.959 BaseBdev2' 00:17:14.959 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:14.959 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:14.959 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:14.959 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:14.959 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:14.959 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.959 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.959 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.959 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:14.959 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:14.959 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:14.959 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:14.959 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- 
# jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:14.959 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.959 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.959 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.959 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:14.959 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:14.959 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:14.959 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.959 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.959 [2024-12-07 17:33:48.237169] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:14.959 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.959 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:14.959 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:14.959 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:14.959 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:17:14.959 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:14.959 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:14.959 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:14.959 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:14.959 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:14.959 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:14.959 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:14.959 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:14.959 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:14.959 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:14.959 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:15.219 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.219 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:15.219 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.219 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:15.219 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.219 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:15.219 "name": "Existed_Raid", 00:17:15.219 "uuid": "a1509a3a-3f23-4a27-b89d-2af131daab7b", 00:17:15.219 "strip_size_kb": 0, 00:17:15.219 "state": "online", 00:17:15.219 "raid_level": "raid1", 00:17:15.219 "superblock": true, 00:17:15.219 "num_base_bdevs": 2, 00:17:15.219 
"num_base_bdevs_discovered": 1, 00:17:15.219 "num_base_bdevs_operational": 1, 00:17:15.219 "base_bdevs_list": [ 00:17:15.219 { 00:17:15.219 "name": null, 00:17:15.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.219 "is_configured": false, 00:17:15.219 "data_offset": 0, 00:17:15.219 "data_size": 7936 00:17:15.219 }, 00:17:15.219 { 00:17:15.219 "name": "BaseBdev2", 00:17:15.219 "uuid": "ef73989d-d642-4732-8938-5ad5d7248f40", 00:17:15.219 "is_configured": true, 00:17:15.219 "data_offset": 256, 00:17:15.219 "data_size": 7936 00:17:15.219 } 00:17:15.219 ] 00:17:15.219 }' 00:17:15.219 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:15.219 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:15.477 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:15.477 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:15.477 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:15.477 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.477 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.477 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:15.477 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.477 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:15.477 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:15.477 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:15.477 17:33:48 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.477 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:15.477 [2024-12-07 17:33:48.774797] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:15.477 [2024-12-07 17:33:48.774904] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:15.737 [2024-12-07 17:33:48.872277] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:15.737 [2024-12-07 17:33:48.872329] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:15.737 [2024-12-07 17:33:48.872341] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:15.737 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.737 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:15.737 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:15.737 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.737 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:15.737 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.737 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:15.737 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.737 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:15.737 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' 
']' 00:17:15.737 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:15.737 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 85909 00:17:15.737 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 85909 ']' 00:17:15.737 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 85909 00:17:15.737 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:17:15.737 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:15.737 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85909 00:17:15.737 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:15.737 killing process with pid 85909 00:17:15.737 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:15.737 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85909' 00:17:15.737 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 85909 00:17:15.737 [2024-12-07 17:33:48.967297] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:15.737 17:33:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 85909 00:17:15.737 [2024-12-07 17:33:48.984356] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:17.114 17:33:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:17:17.114 00:17:17.114 real 0m4.946s 00:17:17.114 user 0m7.078s 00:17:17.114 sys 0m0.885s 00:17:17.114 17:33:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:17:17.114 17:33:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.114 ************************************ 00:17:17.114 END TEST raid_state_function_test_sb_4k 00:17:17.114 ************************************ 00:17:17.114 17:33:50 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:17:17.114 17:33:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:17.114 17:33:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:17.114 17:33:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:17.114 ************************************ 00:17:17.114 START TEST raid_superblock_test_4k 00:17:17.114 ************************************ 00:17:17.114 17:33:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:17:17.114 17:33:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:17.115 17:33:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:17.115 17:33:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:17.115 17:33:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:17.115 17:33:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:17.115 17:33:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:17.115 17:33:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:17.115 17:33:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:17.115 17:33:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:17.115 17:33:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 
00:17:17.115 17:33:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:17.115 17:33:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:17.115 17:33:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:17.115 17:33:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:17.115 17:33:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:17.115 17:33:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86161 00:17:17.115 17:33:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:17.115 17:33:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 86161 00:17:17.115 17:33:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 86161 ']' 00:17:17.115 17:33:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:17.115 17:33:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:17.115 17:33:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:17.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:17.115 17:33:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:17.115 17:33:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.115 [2024-12-07 17:33:50.253204] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:17:17.115 [2024-12-07 17:33:50.253423] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86161 ] 00:17:17.115 [2024-12-07 17:33:50.418476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.374 [2024-12-07 17:33:50.530647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:17.374 [2024-12-07 17:33:50.717974] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:17.374 [2024-12-07 17:33:50.718088] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:17.942 17:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:17.942 17:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:17:17.942 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:17.942 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:17.942 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:17.942 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:17.942 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:17.942 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:17.942 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:17.942 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:17.942 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:17:17.942 17:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.942 17:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.942 malloc1 00:17:17.942 17:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.942 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:17.942 17:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.942 17:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.942 [2024-12-07 17:33:51.139270] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:17.942 [2024-12-07 17:33:51.139408] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:17.943 [2024-12-07 17:33:51.139477] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:17.943 [2024-12-07 17:33:51.139521] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:17.943 [2024-12-07 17:33:51.141707] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:17.943 [2024-12-07 17:33:51.141797] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:17.943 pt1 00:17:17.943 17:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.943 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:17.943 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:17.943 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:17.943 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:17:17.943 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:17.943 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:17.943 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:17.943 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:17.943 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:17:17.943 17:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.943 17:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.943 malloc2 00:17:17.943 17:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.943 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:17.943 17:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.943 17:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.943 [2024-12-07 17:33:51.199287] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:17.943 [2024-12-07 17:33:51.199346] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:17.943 [2024-12-07 17:33:51.199388] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:17.943 [2024-12-07 17:33:51.199396] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:17.943 [2024-12-07 17:33:51.201430] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:17.943 [2024-12-07 
17:33:51.201532] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:17.943 pt2 00:17:17.943 17:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.943 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:17.943 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:17.943 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:17.943 17:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.943 17:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.943 [2024-12-07 17:33:51.211310] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:17.943 [2024-12-07 17:33:51.213107] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:17.943 [2024-12-07 17:33:51.213324] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:17.943 [2024-12-07 17:33:51.213376] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:17.943 [2024-12-07 17:33:51.213620] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:17.943 [2024-12-07 17:33:51.213823] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:17.943 [2024-12-07 17:33:51.213871] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:17.943 [2024-12-07 17:33:51.214060] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:17.943 17:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.943 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:17.943 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:17.943 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:17.943 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:17.943 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:17.943 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:17.943 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:17.943 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:17.943 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:17.943 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:17.943 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.943 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.943 17:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.943 17:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.943 17:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.943 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:17.943 "name": "raid_bdev1", 00:17:17.943 "uuid": "d0220b40-4814-4d4d-9322-1567ca9fcbdd", 00:17:17.943 "strip_size_kb": 0, 00:17:17.943 "state": "online", 00:17:17.943 "raid_level": "raid1", 00:17:17.943 "superblock": true, 00:17:17.943 "num_base_bdevs": 2, 00:17:17.943 
"num_base_bdevs_discovered": 2, 00:17:17.943 "num_base_bdevs_operational": 2, 00:17:17.943 "base_bdevs_list": [ 00:17:17.943 { 00:17:17.943 "name": "pt1", 00:17:17.943 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:17.943 "is_configured": true, 00:17:17.943 "data_offset": 256, 00:17:17.943 "data_size": 7936 00:17:17.943 }, 00:17:17.943 { 00:17:17.943 "name": "pt2", 00:17:17.943 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:17.943 "is_configured": true, 00:17:17.943 "data_offset": 256, 00:17:17.943 "data_size": 7936 00:17:17.943 } 00:17:17.943 ] 00:17:17.943 }' 00:17:17.943 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:17.943 17:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.512 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:18.512 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:18.512 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:18.512 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:18.512 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:18.512 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:18.512 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:18.512 17:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.512 17:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.512 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:18.512 [2024-12-07 17:33:51.670772] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:17:18.512 17:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.512 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:18.512 "name": "raid_bdev1", 00:17:18.512 "aliases": [ 00:17:18.512 "d0220b40-4814-4d4d-9322-1567ca9fcbdd" 00:17:18.512 ], 00:17:18.512 "product_name": "Raid Volume", 00:17:18.512 "block_size": 4096, 00:17:18.512 "num_blocks": 7936, 00:17:18.512 "uuid": "d0220b40-4814-4d4d-9322-1567ca9fcbdd", 00:17:18.512 "assigned_rate_limits": { 00:17:18.512 "rw_ios_per_sec": 0, 00:17:18.512 "rw_mbytes_per_sec": 0, 00:17:18.512 "r_mbytes_per_sec": 0, 00:17:18.512 "w_mbytes_per_sec": 0 00:17:18.512 }, 00:17:18.512 "claimed": false, 00:17:18.512 "zoned": false, 00:17:18.512 "supported_io_types": { 00:17:18.512 "read": true, 00:17:18.512 "write": true, 00:17:18.512 "unmap": false, 00:17:18.512 "flush": false, 00:17:18.512 "reset": true, 00:17:18.512 "nvme_admin": false, 00:17:18.512 "nvme_io": false, 00:17:18.512 "nvme_io_md": false, 00:17:18.512 "write_zeroes": true, 00:17:18.512 "zcopy": false, 00:17:18.512 "get_zone_info": false, 00:17:18.512 "zone_management": false, 00:17:18.512 "zone_append": false, 00:17:18.512 "compare": false, 00:17:18.512 "compare_and_write": false, 00:17:18.512 "abort": false, 00:17:18.512 "seek_hole": false, 00:17:18.512 "seek_data": false, 00:17:18.512 "copy": false, 00:17:18.512 "nvme_iov_md": false 00:17:18.512 }, 00:17:18.512 "memory_domains": [ 00:17:18.512 { 00:17:18.512 "dma_device_id": "system", 00:17:18.512 "dma_device_type": 1 00:17:18.512 }, 00:17:18.512 { 00:17:18.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:18.512 "dma_device_type": 2 00:17:18.512 }, 00:17:18.512 { 00:17:18.512 "dma_device_id": "system", 00:17:18.512 "dma_device_type": 1 00:17:18.512 }, 00:17:18.512 { 00:17:18.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:18.512 "dma_device_type": 2 00:17:18.512 } 00:17:18.512 ], 
00:17:18.512 "driver_specific": { 00:17:18.512 "raid": { 00:17:18.512 "uuid": "d0220b40-4814-4d4d-9322-1567ca9fcbdd", 00:17:18.512 "strip_size_kb": 0, 00:17:18.512 "state": "online", 00:17:18.512 "raid_level": "raid1", 00:17:18.512 "superblock": true, 00:17:18.512 "num_base_bdevs": 2, 00:17:18.512 "num_base_bdevs_discovered": 2, 00:17:18.512 "num_base_bdevs_operational": 2, 00:17:18.512 "base_bdevs_list": [ 00:17:18.512 { 00:17:18.512 "name": "pt1", 00:17:18.512 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:18.512 "is_configured": true, 00:17:18.512 "data_offset": 256, 00:17:18.512 "data_size": 7936 00:17:18.512 }, 00:17:18.512 { 00:17:18.512 "name": "pt2", 00:17:18.512 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:18.512 "is_configured": true, 00:17:18.512 "data_offset": 256, 00:17:18.512 "data_size": 7936 00:17:18.512 } 00:17:18.512 ] 00:17:18.512 } 00:17:18.512 } 00:17:18.512 }' 00:17:18.512 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:18.512 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:18.512 pt2' 00:17:18.512 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:18.512 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:18.512 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:18.512 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:18.512 17:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.512 17:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.512 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:18.512 17:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.512 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:18.512 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:18.512 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:18.512 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:18.513 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:18.513 17:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.513 17:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.513 17:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.513 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:18.513 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:18.513 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:18.513 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:18.513 17:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.513 17:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.513 [2024-12-07 17:33:51.882404] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:18.774 17:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:17:18.774 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d0220b40-4814-4d4d-9322-1567ca9fcbdd 00:17:18.774 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z d0220b40-4814-4d4d-9322-1567ca9fcbdd ']' 00:17:18.774 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:18.774 17:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.774 17:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.774 [2024-12-07 17:33:51.930043] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:18.774 [2024-12-07 17:33:51.930068] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:18.774 [2024-12-07 17:33:51.930147] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:18.774 [2024-12-07 17:33:51.930201] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:18.774 [2024-12-07 17:33:51.930213] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:18.774 17:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.774 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.774 17:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.774 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:18.774 17:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.774 17:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.774 17:33:51 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:18.774 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:18.774 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:18.774 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:18.774 17:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.774 17:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.774 17:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.774 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:18.774 17:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:18.774 17:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.774 17:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.774 17:33:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.774 17:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:18.774 17:33:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.774 17:33:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.774 17:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:18.774 17:33:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.774 17:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:18.774 17:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:18.774 17:33:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:17:18.774 17:33:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:18.774 17:33:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:18.774 17:33:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:18.774 17:33:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:18.774 17:33:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:18.774 17:33:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:18.774 17:33:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.774 17:33:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.774 [2024-12-07 17:33:52.073820] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:18.774 [2024-12-07 17:33:52.075603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:18.774 [2024-12-07 17:33:52.075666] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:18.774 [2024-12-07 17:33:52.075732] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:18.774 [2024-12-07 17:33:52.075746] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:18.774 [2024-12-07 17:33:52.075756] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:18.774 request: 00:17:18.774 { 00:17:18.774 "name": "raid_bdev1", 00:17:18.774 "raid_level": "raid1", 00:17:18.774 "base_bdevs": [ 00:17:18.774 "malloc1", 00:17:18.774 "malloc2" 00:17:18.774 ], 00:17:18.774 "superblock": false, 00:17:18.774 "method": "bdev_raid_create", 00:17:18.774 "req_id": 1 00:17:18.774 } 00:17:18.774 Got JSON-RPC error response 00:17:18.774 response: 00:17:18.774 { 00:17:18.774 "code": -17, 00:17:18.774 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:18.774 } 00:17:18.774 17:33:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:18.774 17:33:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:17:18.774 17:33:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:18.774 17:33:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:18.774 17:33:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:18.774 17:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.774 17:33:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.774 17:33:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.774 17:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:18.774 17:33:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.774 17:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:18.774 17:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:18.774 17:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:17:18.774 17:33:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.774 17:33:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.774 [2024-12-07 17:33:52.141690] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:18.774 [2024-12-07 17:33:52.141739] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:18.774 [2024-12-07 17:33:52.141759] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:18.774 [2024-12-07 17:33:52.141769] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:18.774 [2024-12-07 17:33:52.143898] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:18.774 [2024-12-07 17:33:52.144003] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:18.774 [2024-12-07 17:33:52.144102] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:18.775 [2024-12-07 17:33:52.144164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:18.775 pt1 00:17:18.775 17:33:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.775 17:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:18.775 17:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:18.775 17:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:18.775 17:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:18.775 17:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:18.775 17:33:52 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:18.775 17:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:18.775 17:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:18.775 17:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:18.775 17:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:19.036 17:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.036 17:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.036 17:33:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.036 17:33:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.036 17:33:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.036 17:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:19.036 "name": "raid_bdev1", 00:17:19.036 "uuid": "d0220b40-4814-4d4d-9322-1567ca9fcbdd", 00:17:19.036 "strip_size_kb": 0, 00:17:19.036 "state": "configuring", 00:17:19.036 "raid_level": "raid1", 00:17:19.036 "superblock": true, 00:17:19.036 "num_base_bdevs": 2, 00:17:19.036 "num_base_bdevs_discovered": 1, 00:17:19.036 "num_base_bdevs_operational": 2, 00:17:19.036 "base_bdevs_list": [ 00:17:19.036 { 00:17:19.036 "name": "pt1", 00:17:19.036 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:19.036 "is_configured": true, 00:17:19.036 "data_offset": 256, 00:17:19.036 "data_size": 7936 00:17:19.036 }, 00:17:19.036 { 00:17:19.036 "name": null, 00:17:19.036 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:19.036 "is_configured": false, 00:17:19.036 "data_offset": 256, 00:17:19.036 "data_size": 7936 00:17:19.036 } 
00:17:19.036 ] 00:17:19.036 }' 00:17:19.036 17:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:19.036 17:33:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.296 17:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:19.296 17:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:19.296 17:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:19.296 17:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:19.296 17:33:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.296 17:33:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.296 [2024-12-07 17:33:52.573022] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:19.296 [2024-12-07 17:33:52.573137] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:19.296 [2024-12-07 17:33:52.573177] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:19.296 [2024-12-07 17:33:52.573208] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:19.296 [2024-12-07 17:33:52.573704] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:19.296 [2024-12-07 17:33:52.573768] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:19.296 [2024-12-07 17:33:52.573881] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:19.296 [2024-12-07 17:33:52.573950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:19.296 [2024-12-07 17:33:52.574105] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:17:19.296 [2024-12-07 17:33:52.574150] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:19.296 [2024-12-07 17:33:52.574419] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:19.296 [2024-12-07 17:33:52.574607] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:19.296 [2024-12-07 17:33:52.574646] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:19.296 [2024-12-07 17:33:52.574814] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:19.296 pt2 00:17:19.296 17:33:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.296 17:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:19.296 17:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:19.296 17:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:19.296 17:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:19.296 17:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:19.296 17:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:19.296 17:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:19.296 17:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:19.296 17:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:19.296 17:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:19.296 17:33:52 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:19.296 17:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:19.296 17:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.296 17:33:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.296 17:33:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.296 17:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.296 17:33:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.296 17:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:19.296 "name": "raid_bdev1", 00:17:19.296 "uuid": "d0220b40-4814-4d4d-9322-1567ca9fcbdd", 00:17:19.296 "strip_size_kb": 0, 00:17:19.296 "state": "online", 00:17:19.296 "raid_level": "raid1", 00:17:19.296 "superblock": true, 00:17:19.296 "num_base_bdevs": 2, 00:17:19.296 "num_base_bdevs_discovered": 2, 00:17:19.296 "num_base_bdevs_operational": 2, 00:17:19.296 "base_bdevs_list": [ 00:17:19.296 { 00:17:19.296 "name": "pt1", 00:17:19.296 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:19.296 "is_configured": true, 00:17:19.296 "data_offset": 256, 00:17:19.296 "data_size": 7936 00:17:19.296 }, 00:17:19.296 { 00:17:19.296 "name": "pt2", 00:17:19.296 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:19.296 "is_configured": true, 00:17:19.296 "data_offset": 256, 00:17:19.296 "data_size": 7936 00:17:19.296 } 00:17:19.296 ] 00:17:19.296 }' 00:17:19.296 17:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:19.296 17:33:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.865 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:17:19.865 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:19.865 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:19.865 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:19.865 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:19.865 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:19.865 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:19.865 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:19.866 17:33:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.866 17:33:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.866 [2024-12-07 17:33:53.028409] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:19.866 17:33:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.866 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:19.866 "name": "raid_bdev1", 00:17:19.866 "aliases": [ 00:17:19.866 "d0220b40-4814-4d4d-9322-1567ca9fcbdd" 00:17:19.866 ], 00:17:19.866 "product_name": "Raid Volume", 00:17:19.866 "block_size": 4096, 00:17:19.866 "num_blocks": 7936, 00:17:19.866 "uuid": "d0220b40-4814-4d4d-9322-1567ca9fcbdd", 00:17:19.866 "assigned_rate_limits": { 00:17:19.866 "rw_ios_per_sec": 0, 00:17:19.866 "rw_mbytes_per_sec": 0, 00:17:19.866 "r_mbytes_per_sec": 0, 00:17:19.866 "w_mbytes_per_sec": 0 00:17:19.866 }, 00:17:19.866 "claimed": false, 00:17:19.866 "zoned": false, 00:17:19.866 "supported_io_types": { 00:17:19.866 "read": true, 00:17:19.866 "write": true, 00:17:19.866 "unmap": false, 
00:17:19.866 "flush": false, 00:17:19.866 "reset": true, 00:17:19.866 "nvme_admin": false, 00:17:19.866 "nvme_io": false, 00:17:19.866 "nvme_io_md": false, 00:17:19.866 "write_zeroes": true, 00:17:19.866 "zcopy": false, 00:17:19.866 "get_zone_info": false, 00:17:19.866 "zone_management": false, 00:17:19.866 "zone_append": false, 00:17:19.866 "compare": false, 00:17:19.866 "compare_and_write": false, 00:17:19.866 "abort": false, 00:17:19.866 "seek_hole": false, 00:17:19.866 "seek_data": false, 00:17:19.866 "copy": false, 00:17:19.866 "nvme_iov_md": false 00:17:19.866 }, 00:17:19.866 "memory_domains": [ 00:17:19.866 { 00:17:19.866 "dma_device_id": "system", 00:17:19.866 "dma_device_type": 1 00:17:19.866 }, 00:17:19.866 { 00:17:19.866 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:19.866 "dma_device_type": 2 00:17:19.866 }, 00:17:19.866 { 00:17:19.866 "dma_device_id": "system", 00:17:19.866 "dma_device_type": 1 00:17:19.866 }, 00:17:19.866 { 00:17:19.866 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:19.866 "dma_device_type": 2 00:17:19.866 } 00:17:19.866 ], 00:17:19.866 "driver_specific": { 00:17:19.866 "raid": { 00:17:19.866 "uuid": "d0220b40-4814-4d4d-9322-1567ca9fcbdd", 00:17:19.866 "strip_size_kb": 0, 00:17:19.866 "state": "online", 00:17:19.866 "raid_level": "raid1", 00:17:19.866 "superblock": true, 00:17:19.866 "num_base_bdevs": 2, 00:17:19.866 "num_base_bdevs_discovered": 2, 00:17:19.866 "num_base_bdevs_operational": 2, 00:17:19.866 "base_bdevs_list": [ 00:17:19.866 { 00:17:19.866 "name": "pt1", 00:17:19.866 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:19.866 "is_configured": true, 00:17:19.866 "data_offset": 256, 00:17:19.866 "data_size": 7936 00:17:19.866 }, 00:17:19.866 { 00:17:19.866 "name": "pt2", 00:17:19.866 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:19.866 "is_configured": true, 00:17:19.866 "data_offset": 256, 00:17:19.866 "data_size": 7936 00:17:19.866 } 00:17:19.866 ] 00:17:19.866 } 00:17:19.866 } 00:17:19.866 }' 00:17:19.866 
17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:19.866 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:19.866 pt2' 00:17:19.866 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:19.866 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:19.866 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:19.866 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:19.866 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:19.866 17:33:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.866 17:33:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.866 17:33:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.866 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:19.866 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:19.866 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:19.866 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:19.866 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:19.866 17:33:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.866 
17:33:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.866 17:33:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.866 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:19.866 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:19.866 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:19.866 17:33:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.866 17:33:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.866 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:19.866 [2024-12-07 17:33:53.236065] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:20.127 17:33:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.127 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' d0220b40-4814-4d4d-9322-1567ca9fcbdd '!=' d0220b40-4814-4d4d-9322-1567ca9fcbdd ']' 00:17:20.127 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:20.127 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:20.127 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:17:20.127 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:20.127 17:33:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.127 17:33:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.127 [2024-12-07 17:33:53.283785] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:20.127 
17:33:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.127 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:20.127 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:20.127 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:20.127 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:20.127 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:20.127 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:20.127 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:20.127 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:20.127 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:20.127 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:20.127 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.127 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.127 17:33:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.127 17:33:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.127 17:33:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.127 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:20.127 "name": "raid_bdev1", 00:17:20.127 "uuid": "d0220b40-4814-4d4d-9322-1567ca9fcbdd", 
00:17:20.127 "strip_size_kb": 0, 00:17:20.127 "state": "online", 00:17:20.127 "raid_level": "raid1", 00:17:20.127 "superblock": true, 00:17:20.127 "num_base_bdevs": 2, 00:17:20.127 "num_base_bdevs_discovered": 1, 00:17:20.127 "num_base_bdevs_operational": 1, 00:17:20.127 "base_bdevs_list": [ 00:17:20.127 { 00:17:20.127 "name": null, 00:17:20.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.127 "is_configured": false, 00:17:20.127 "data_offset": 0, 00:17:20.127 "data_size": 7936 00:17:20.127 }, 00:17:20.127 { 00:17:20.127 "name": "pt2", 00:17:20.127 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:20.127 "is_configured": true, 00:17:20.127 "data_offset": 256, 00:17:20.127 "data_size": 7936 00:17:20.127 } 00:17:20.127 ] 00:17:20.127 }' 00:17:20.127 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:20.127 17:33:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.386 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:20.386 17:33:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.386 17:33:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.386 [2024-12-07 17:33:53.739038] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:20.386 [2024-12-07 17:33:53.739109] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:20.386 [2024-12-07 17:33:53.739208] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:20.386 [2024-12-07 17:33:53.739271] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:20.386 [2024-12-07 17:33:53.739320] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:20.386 17:33:53 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.386 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.386 17:33:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.387 17:33:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.387 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:20.387 17:33:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.647 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:20.647 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:20.647 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:20.647 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:20.647 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:20.647 17:33:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.647 17:33:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.647 17:33:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.647 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:20.647 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:20.647 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:20.647 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:20.647 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:17:20.647 17:33:53 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:20.647 17:33:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.647 17:33:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.647 [2024-12-07 17:33:53.810874] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:20.647 [2024-12-07 17:33:53.810943] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:20.647 [2024-12-07 17:33:53.810962] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:20.647 [2024-12-07 17:33:53.810972] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:20.647 [2024-12-07 17:33:53.813258] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:20.647 [2024-12-07 17:33:53.813298] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:20.647 [2024-12-07 17:33:53.813375] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:20.647 [2024-12-07 17:33:53.813422] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:20.647 [2024-12-07 17:33:53.813533] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:20.647 [2024-12-07 17:33:53.813545] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:20.647 [2024-12-07 17:33:53.813762] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:20.647 [2024-12-07 17:33:53.813921] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:20.647 [2024-12-07 17:33:53.813948] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 
00:17:20.647 [2024-12-07 17:33:53.814094] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:20.647 pt2 00:17:20.647 17:33:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.647 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:20.647 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:20.647 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:20.647 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:20.647 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:20.647 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:20.647 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:20.647 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:20.647 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:20.647 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:20.647 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.647 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.647 17:33:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.647 17:33:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.647 17:33:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.647 17:33:53 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:20.647 "name": "raid_bdev1", 00:17:20.647 "uuid": "d0220b40-4814-4d4d-9322-1567ca9fcbdd", 00:17:20.647 "strip_size_kb": 0, 00:17:20.647 "state": "online", 00:17:20.647 "raid_level": "raid1", 00:17:20.647 "superblock": true, 00:17:20.647 "num_base_bdevs": 2, 00:17:20.647 "num_base_bdevs_discovered": 1, 00:17:20.647 "num_base_bdevs_operational": 1, 00:17:20.647 "base_bdevs_list": [ 00:17:20.647 { 00:17:20.647 "name": null, 00:17:20.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.647 "is_configured": false, 00:17:20.647 "data_offset": 256, 00:17:20.647 "data_size": 7936 00:17:20.647 }, 00:17:20.647 { 00:17:20.647 "name": "pt2", 00:17:20.647 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:20.647 "is_configured": true, 00:17:20.647 "data_offset": 256, 00:17:20.647 "data_size": 7936 00:17:20.647 } 00:17:20.647 ] 00:17:20.647 }' 00:17:20.647 17:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:20.647 17:33:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.907 17:33:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:20.907 17:33:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.907 17:33:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.907 [2024-12-07 17:33:54.198199] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:20.907 [2024-12-07 17:33:54.198274] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:20.907 [2024-12-07 17:33:54.198371] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:20.907 [2024-12-07 17:33:54.198432] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:20.907 [2024-12-07 17:33:54.198484] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:20.907 17:33:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.907 17:33:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.907 17:33:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:20.907 17:33:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.907 17:33:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.907 17:33:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.907 17:33:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:20.907 17:33:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:20.907 17:33:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:20.907 17:33:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:20.907 17:33:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.907 17:33:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.907 [2024-12-07 17:33:54.262105] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:20.907 [2024-12-07 17:33:54.262207] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:20.907 [2024-12-07 17:33:54.262240] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:20.907 [2024-12-07 17:33:54.262266] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:20.907 [2024-12-07 17:33:54.264494] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:20.907 [2024-12-07 17:33:54.264567] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:20.907 [2024-12-07 17:33:54.264666] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:20.907 [2024-12-07 17:33:54.264748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:20.907 [2024-12-07 17:33:54.264973] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:20.907 [2024-12-07 17:33:54.265029] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:20.907 [2024-12-07 17:33:54.265064] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:20.907 [2024-12-07 17:33:54.265162] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:20.907 [2024-12-07 17:33:54.265266] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:20.907 [2024-12-07 17:33:54.265304] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:20.907 [2024-12-07 17:33:54.265552] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:20.907 [2024-12-07 17:33:54.265734] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:20.907 [2024-12-07 17:33:54.265781] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:20.907 [2024-12-07 17:33:54.265970] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:20.907 pt1 00:17:20.907 17:33:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.907 17:33:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 
00:17:20.907 17:33:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:20.907 17:33:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:20.907 17:33:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:20.907 17:33:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:20.907 17:33:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:20.907 17:33:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:20.907 17:33:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:20.907 17:33:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:20.907 17:33:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:20.907 17:33:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:20.908 17:33:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.908 17:33:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.908 17:33:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.908 17:33:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.167 17:33:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.167 17:33:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:21.167 "name": "raid_bdev1", 00:17:21.167 "uuid": "d0220b40-4814-4d4d-9322-1567ca9fcbdd", 00:17:21.167 "strip_size_kb": 0, 00:17:21.167 "state": "online", 00:17:21.167 "raid_level": "raid1", 
00:17:21.167 "superblock": true, 00:17:21.167 "num_base_bdevs": 2, 00:17:21.167 "num_base_bdevs_discovered": 1, 00:17:21.167 "num_base_bdevs_operational": 1, 00:17:21.167 "base_bdevs_list": [ 00:17:21.167 { 00:17:21.167 "name": null, 00:17:21.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.167 "is_configured": false, 00:17:21.167 "data_offset": 256, 00:17:21.167 "data_size": 7936 00:17:21.167 }, 00:17:21.167 { 00:17:21.167 "name": "pt2", 00:17:21.167 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:21.167 "is_configured": true, 00:17:21.167 "data_offset": 256, 00:17:21.167 "data_size": 7936 00:17:21.167 } 00:17:21.167 ] 00:17:21.167 }' 00:17:21.167 17:33:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:21.167 17:33:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.427 17:33:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:21.427 17:33:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:21.427 17:33:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.427 17:33:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.427 17:33:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.427 17:33:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:21.427 17:33:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:21.427 17:33:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.427 17:33:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.427 17:33:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:21.427 
[2024-12-07 17:33:54.757522] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:21.427 17:33:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.427 17:33:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' d0220b40-4814-4d4d-9322-1567ca9fcbdd '!=' d0220b40-4814-4d4d-9322-1567ca9fcbdd ']' 00:17:21.427 17:33:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86161 00:17:21.427 17:33:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 86161 ']' 00:17:21.427 17:33:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 86161 00:17:21.427 17:33:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:17:21.427 17:33:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:21.427 17:33:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86161 00:17:21.687 17:33:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:21.687 17:33:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:21.687 17:33:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86161' 00:17:21.687 killing process with pid 86161 00:17:21.687 17:33:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 86161 00:17:21.687 [2024-12-07 17:33:54.830875] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:21.687 [2024-12-07 17:33:54.830979] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:21.687 [2024-12-07 17:33:54.831025] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:21.687 [2024-12-07 17:33:54.831038] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:21.687 17:33:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 86161 00:17:21.687 [2024-12-07 17:33:55.033974] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:23.064 17:33:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:17:23.064 00:17:23.064 real 0m5.954s 00:17:23.064 user 0m8.989s 00:17:23.064 sys 0m1.080s 00:17:23.064 17:33:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:23.064 ************************************ 00:17:23.064 END TEST raid_superblock_test_4k 00:17:23.064 ************************************ 00:17:23.064 17:33:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:23.064 17:33:56 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:17:23.064 17:33:56 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:17:23.064 17:33:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:23.064 17:33:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:23.064 17:33:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:23.064 ************************************ 00:17:23.064 START TEST raid_rebuild_test_sb_4k 00:17:23.064 ************************************ 00:17:23.064 17:33:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:17:23.064 17:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:23.064 17:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:23.064 17:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:23.064 17:33:56 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:23.064 17:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:23.064 17:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:23.064 17:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:23.064 17:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:23.064 17:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:23.064 17:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:23.064 17:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:23.064 17:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:23.064 17:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:23.064 17:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:23.064 17:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:23.064 17:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:23.064 17:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:23.064 17:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:23.064 17:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:23.064 17:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:23.064 17:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:23.064 17:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:23.064 17:33:56 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:23.064 17:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:23.064 17:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86486 00:17:23.064 17:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:23.064 17:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86486 00:17:23.064 17:33:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86486 ']' 00:17:23.064 17:33:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:23.064 17:33:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:23.064 17:33:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:23.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:23.064 17:33:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:23.064 17:33:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:23.064 [2024-12-07 17:33:56.301066] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:17:23.064 [2024-12-07 17:33:56.301284] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:17:23.064 Zero copy mechanism will not be used. 
00:17:23.064 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86486 ] 00:17:23.324 [2024-12-07 17:33:56.478609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.324 [2024-12-07 17:33:56.585427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.583 [2024-12-07 17:33:56.781149] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:23.583 [2024-12-07 17:33:56.781236] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:23.843 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:23.843 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:17:23.843 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:23.843 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:17:23.843 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.843 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:23.843 BaseBdev1_malloc 00:17:23.843 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.843 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:23.843 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.843 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:23.843 [2024-12-07 17:33:57.170250] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:23.843 [2024-12-07 17:33:57.170314] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:23.843 [2024-12-07 17:33:57.170335] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:23.843 [2024-12-07 17:33:57.170346] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:23.843 [2024-12-07 17:33:57.172439] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:23.843 [2024-12-07 17:33:57.172480] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:23.843 BaseBdev1 00:17:23.843 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.843 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:23.843 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:17:23.843 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.843 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:23.843 BaseBdev2_malloc 00:17:23.843 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.843 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:23.843 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.843 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.103 [2024-12-07 17:33:57.226194] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:24.103 [2024-12-07 17:33:57.226268] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:24.103 [2024-12-07 17:33:57.226291] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device 
created at: 0x0x616000007e80 00:17:24.103 [2024-12-07 17:33:57.226302] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:24.103 [2024-12-07 17:33:57.228403] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:24.103 [2024-12-07 17:33:57.228441] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:24.103 BaseBdev2 00:17:24.103 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.103 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:17:24.103 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.103 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.103 spare_malloc 00:17:24.103 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.103 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:24.103 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.103 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.103 spare_delay 00:17:24.103 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.103 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:24.103 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.103 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.103 [2024-12-07 17:33:57.325223] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:24.104 
[2024-12-07 17:33:57.325280] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:24.104 [2024-12-07 17:33:57.325298] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:24.104 [2024-12-07 17:33:57.325308] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:24.104 [2024-12-07 17:33:57.327342] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:24.104 [2024-12-07 17:33:57.327382] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:24.104 spare 00:17:24.104 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.104 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:24.104 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.104 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.104 [2024-12-07 17:33:57.337272] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:24.104 [2024-12-07 17:33:57.338992] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:24.104 [2024-12-07 17:33:57.339173] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:24.104 [2024-12-07 17:33:57.339188] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:24.104 [2024-12-07 17:33:57.339413] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:24.104 [2024-12-07 17:33:57.339623] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:24.104 [2024-12-07 17:33:57.339633] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000007780 00:17:24.104 [2024-12-07 17:33:57.339790] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:24.104 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.104 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:24.104 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:24.104 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:24.104 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:24.104 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:24.104 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:24.104 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.104 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.104 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.104 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.104 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.104 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.104 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.104 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.104 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.104 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.104 "name": "raid_bdev1", 00:17:24.104 "uuid": "c3e5df2d-bde3-4d2d-a9dd-9e6ad1e67529", 00:17:24.104 "strip_size_kb": 0, 00:17:24.104 "state": "online", 00:17:24.104 "raid_level": "raid1", 00:17:24.104 "superblock": true, 00:17:24.104 "num_base_bdevs": 2, 00:17:24.104 "num_base_bdevs_discovered": 2, 00:17:24.104 "num_base_bdevs_operational": 2, 00:17:24.104 "base_bdevs_list": [ 00:17:24.104 { 00:17:24.104 "name": "BaseBdev1", 00:17:24.104 "uuid": "a1dfbca1-b64e-51bd-ae80-b56e572a8cca", 00:17:24.104 "is_configured": true, 00:17:24.104 "data_offset": 256, 00:17:24.104 "data_size": 7936 00:17:24.104 }, 00:17:24.104 { 00:17:24.104 "name": "BaseBdev2", 00:17:24.104 "uuid": "3043381b-d085-58cd-b695-6659adfbbb5b", 00:17:24.104 "is_configured": true, 00:17:24.104 "data_offset": 256, 00:17:24.104 "data_size": 7936 00:17:24.104 } 00:17:24.104 ] 00:17:24.104 }' 00:17:24.104 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.104 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.672 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:24.673 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.673 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.673 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:24.673 [2024-12-07 17:33:57.820715] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:24.673 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.673 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:17:24.673 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:24.673 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:24.673 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.673 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.673 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.673 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:17:24.673 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:24.673 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:24.673 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:24.673 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:24.673 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:24.673 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:24.673 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:24.673 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:24.673 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:24.673 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:17:24.673 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:24.673 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:24.673 17:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:24.933 [2024-12-07 17:33:58.092075] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:24.933 /dev/nbd0 00:17:24.933 17:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:24.933 17:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:24.933 17:33:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:24.933 17:33:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:17:24.933 17:33:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:24.933 17:33:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:24.933 17:33:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:24.933 17:33:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:17:24.933 17:33:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:24.933 17:33:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:24.933 17:33:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:24.933 1+0 records in 00:17:24.933 1+0 records out 00:17:24.933 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000333839 s, 12.3 MB/s 00:17:24.933 17:33:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:24.933 17:33:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:17:24.933 17:33:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:24.933 17:33:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:24.933 17:33:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:17:24.933 17:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:24.933 17:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:24.933 17:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:24.933 17:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:24.933 17:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:17:25.503 7936+0 records in 00:17:25.503 7936+0 records out 00:17:25.503 32505856 bytes (33 MB, 31 MiB) copied, 0.680081 s, 47.8 MB/s 00:17:25.503 17:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:25.503 17:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:25.503 17:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:25.503 17:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:25.503 17:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:17:25.503 17:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:25.503 17:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:25.763 17:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:25.763 [2024-12-07 17:33:59.072285] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:17:25.763 17:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:25.763 17:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:25.763 17:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:25.763 17:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:25.763 17:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:25.763 17:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:25.763 17:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:25.763 17:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:25.763 17:33:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.763 17:33:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.763 [2024-12-07 17:33:59.096370] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:25.763 17:33:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.763 17:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:25.763 17:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:25.763 17:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:25.763 17:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:25.763 17:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:25.763 17:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:17:25.763 17:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.763 17:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:25.763 17:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:25.763 17:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.763 17:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.763 17:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.763 17:33:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.763 17:33:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.763 17:33:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.021 17:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:26.021 "name": "raid_bdev1", 00:17:26.021 "uuid": "c3e5df2d-bde3-4d2d-a9dd-9e6ad1e67529", 00:17:26.021 "strip_size_kb": 0, 00:17:26.021 "state": "online", 00:17:26.021 "raid_level": "raid1", 00:17:26.021 "superblock": true, 00:17:26.021 "num_base_bdevs": 2, 00:17:26.021 "num_base_bdevs_discovered": 1, 00:17:26.021 "num_base_bdevs_operational": 1, 00:17:26.021 "base_bdevs_list": [ 00:17:26.021 { 00:17:26.021 "name": null, 00:17:26.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.021 "is_configured": false, 00:17:26.021 "data_offset": 0, 00:17:26.021 "data_size": 7936 00:17:26.021 }, 00:17:26.021 { 00:17:26.021 "name": "BaseBdev2", 00:17:26.021 "uuid": "3043381b-d085-58cd-b695-6659adfbbb5b", 00:17:26.021 "is_configured": true, 00:17:26.021 "data_offset": 256, 00:17:26.021 "data_size": 7936 00:17:26.021 } 00:17:26.021 ] 00:17:26.021 }' 00:17:26.021 17:33:59 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:26.021 17:33:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.281 17:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:26.281 17:33:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.281 17:33:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.281 [2024-12-07 17:33:59.563705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:26.281 [2024-12-07 17:33:59.582528] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:17:26.281 17:33:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.281 17:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:26.281 [2024-12-07 17:33:59.584848] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:27.219 17:34:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:27.219 17:34:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:27.219 17:34:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:27.219 17:34:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:27.219 17:34:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:27.219 17:34:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.219 17:34:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.219 17:34:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.219 17:34:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:27.479 17:34:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.479 17:34:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:27.479 "name": "raid_bdev1", 00:17:27.479 "uuid": "c3e5df2d-bde3-4d2d-a9dd-9e6ad1e67529", 00:17:27.479 "strip_size_kb": 0, 00:17:27.479 "state": "online", 00:17:27.479 "raid_level": "raid1", 00:17:27.479 "superblock": true, 00:17:27.479 "num_base_bdevs": 2, 00:17:27.479 "num_base_bdevs_discovered": 2, 00:17:27.479 "num_base_bdevs_operational": 2, 00:17:27.479 "process": { 00:17:27.479 "type": "rebuild", 00:17:27.479 "target": "spare", 00:17:27.479 "progress": { 00:17:27.479 "blocks": 2560, 00:17:27.479 "percent": 32 00:17:27.479 } 00:17:27.479 }, 00:17:27.479 "base_bdevs_list": [ 00:17:27.479 { 00:17:27.479 "name": "spare", 00:17:27.479 "uuid": "2fc237bc-d740-5fbc-96a3-479c72c2cf0b", 00:17:27.479 "is_configured": true, 00:17:27.479 "data_offset": 256, 00:17:27.479 "data_size": 7936 00:17:27.479 }, 00:17:27.479 { 00:17:27.479 "name": "BaseBdev2", 00:17:27.479 "uuid": "3043381b-d085-58cd-b695-6659adfbbb5b", 00:17:27.479 "is_configured": true, 00:17:27.479 "data_offset": 256, 00:17:27.479 "data_size": 7936 00:17:27.479 } 00:17:27.479 ] 00:17:27.479 }' 00:17:27.479 17:34:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:27.479 17:34:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:27.479 17:34:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:27.479 17:34:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:27.479 17:34:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd 
bdev_raid_remove_base_bdev spare 00:17:27.479 17:34:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.479 17:34:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:27.479 [2024-12-07 17:34:00.752251] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:27.479 [2024-12-07 17:34:00.793888] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:27.479 [2024-12-07 17:34:00.793984] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:27.479 [2024-12-07 17:34:00.794003] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:27.479 [2024-12-07 17:34:00.794016] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:27.479 17:34:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.479 17:34:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:27.479 17:34:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:27.479 17:34:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:27.479 17:34:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:27.479 17:34:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:27.479 17:34:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:27.479 17:34:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:27.479 17:34:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:27.479 17:34:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:27.479 17:34:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:27.479 17:34:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.479 17:34:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.479 17:34:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.479 17:34:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:27.479 17:34:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.737 17:34:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:27.737 "name": "raid_bdev1", 00:17:27.737 "uuid": "c3e5df2d-bde3-4d2d-a9dd-9e6ad1e67529", 00:17:27.737 "strip_size_kb": 0, 00:17:27.737 "state": "online", 00:17:27.737 "raid_level": "raid1", 00:17:27.737 "superblock": true, 00:17:27.737 "num_base_bdevs": 2, 00:17:27.737 "num_base_bdevs_discovered": 1, 00:17:27.737 "num_base_bdevs_operational": 1, 00:17:27.737 "base_bdevs_list": [ 00:17:27.737 { 00:17:27.737 "name": null, 00:17:27.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.737 "is_configured": false, 00:17:27.737 "data_offset": 0, 00:17:27.737 "data_size": 7936 00:17:27.737 }, 00:17:27.737 { 00:17:27.737 "name": "BaseBdev2", 00:17:27.737 "uuid": "3043381b-d085-58cd-b695-6659adfbbb5b", 00:17:27.737 "is_configured": true, 00:17:27.737 "data_offset": 256, 00:17:27.737 "data_size": 7936 00:17:27.737 } 00:17:27.737 ] 00:17:27.737 }' 00:17:27.737 17:34:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:27.737 17:34:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:27.995 17:34:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:27.995 
17:34:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:27.995 17:34:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:27.995 17:34:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:27.995 17:34:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:27.995 17:34:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.995 17:34:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.995 17:34:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.995 17:34:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:27.995 17:34:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.995 17:34:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:27.995 "name": "raid_bdev1", 00:17:27.995 "uuid": "c3e5df2d-bde3-4d2d-a9dd-9e6ad1e67529", 00:17:27.995 "strip_size_kb": 0, 00:17:27.995 "state": "online", 00:17:27.995 "raid_level": "raid1", 00:17:27.995 "superblock": true, 00:17:27.995 "num_base_bdevs": 2, 00:17:27.995 "num_base_bdevs_discovered": 1, 00:17:27.995 "num_base_bdevs_operational": 1, 00:17:27.995 "base_bdevs_list": [ 00:17:27.995 { 00:17:27.995 "name": null, 00:17:27.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.995 "is_configured": false, 00:17:27.995 "data_offset": 0, 00:17:27.995 "data_size": 7936 00:17:27.995 }, 00:17:27.995 { 00:17:27.995 "name": "BaseBdev2", 00:17:27.995 "uuid": "3043381b-d085-58cd-b695-6659adfbbb5b", 00:17:27.995 "is_configured": true, 00:17:27.995 "data_offset": 256, 00:17:27.995 "data_size": 7936 00:17:27.995 } 00:17:27.995 ] 00:17:27.995 }' 00:17:27.995 17:34:01 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:28.254 17:34:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:28.254 17:34:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:28.254 17:34:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:28.254 17:34:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:28.254 17:34:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.254 17:34:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:28.254 [2024-12-07 17:34:01.461206] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:28.254 [2024-12-07 17:34:01.478428] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:17:28.254 17:34:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.254 17:34:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:28.254 [2024-12-07 17:34:01.480614] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:29.239 17:34:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:29.239 17:34:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:29.240 17:34:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:29.240 17:34:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:29.240 17:34:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:29.240 17:34:02 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.240 17:34:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.240 17:34:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.240 17:34:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.240 17:34:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.240 17:34:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:29.240 "name": "raid_bdev1", 00:17:29.240 "uuid": "c3e5df2d-bde3-4d2d-a9dd-9e6ad1e67529", 00:17:29.240 "strip_size_kb": 0, 00:17:29.240 "state": "online", 00:17:29.240 "raid_level": "raid1", 00:17:29.240 "superblock": true, 00:17:29.240 "num_base_bdevs": 2, 00:17:29.240 "num_base_bdevs_discovered": 2, 00:17:29.240 "num_base_bdevs_operational": 2, 00:17:29.240 "process": { 00:17:29.240 "type": "rebuild", 00:17:29.240 "target": "spare", 00:17:29.240 "progress": { 00:17:29.240 "blocks": 2560, 00:17:29.240 "percent": 32 00:17:29.240 } 00:17:29.240 }, 00:17:29.240 "base_bdevs_list": [ 00:17:29.240 { 00:17:29.240 "name": "spare", 00:17:29.240 "uuid": "2fc237bc-d740-5fbc-96a3-479c72c2cf0b", 00:17:29.240 "is_configured": true, 00:17:29.240 "data_offset": 256, 00:17:29.240 "data_size": 7936 00:17:29.240 }, 00:17:29.240 { 00:17:29.240 "name": "BaseBdev2", 00:17:29.240 "uuid": "3043381b-d085-58cd-b695-6659adfbbb5b", 00:17:29.240 "is_configured": true, 00:17:29.240 "data_offset": 256, 00:17:29.240 "data_size": 7936 00:17:29.240 } 00:17:29.240 ] 00:17:29.240 }' 00:17:29.240 17:34:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:29.240 17:34:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:29.240 17:34:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:17:29.240 17:34:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:29.240 17:34:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:29.240 17:34:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:29.240 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:29.240 17:34:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:29.240 17:34:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:29.240 17:34:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:29.240 17:34:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=673 00:17:29.240 17:34:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:29.240 17:34:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:29.240 17:34:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:29.240 17:34:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:29.240 17:34:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:29.240 17:34:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:29.240 17:34:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.240 17:34:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.240 17:34:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.500 17:34:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.500 17:34:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.500 17:34:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:29.500 "name": "raid_bdev1", 00:17:29.500 "uuid": "c3e5df2d-bde3-4d2d-a9dd-9e6ad1e67529", 00:17:29.500 "strip_size_kb": 0, 00:17:29.500 "state": "online", 00:17:29.500 "raid_level": "raid1", 00:17:29.500 "superblock": true, 00:17:29.500 "num_base_bdevs": 2, 00:17:29.500 "num_base_bdevs_discovered": 2, 00:17:29.500 "num_base_bdevs_operational": 2, 00:17:29.500 "process": { 00:17:29.500 "type": "rebuild", 00:17:29.500 "target": "spare", 00:17:29.500 "progress": { 00:17:29.500 "blocks": 2816, 00:17:29.500 "percent": 35 00:17:29.500 } 00:17:29.500 }, 00:17:29.500 "base_bdevs_list": [ 00:17:29.500 { 00:17:29.500 "name": "spare", 00:17:29.500 "uuid": "2fc237bc-d740-5fbc-96a3-479c72c2cf0b", 00:17:29.500 "is_configured": true, 00:17:29.500 "data_offset": 256, 00:17:29.500 "data_size": 7936 00:17:29.500 }, 00:17:29.500 { 00:17:29.500 "name": "BaseBdev2", 00:17:29.500 "uuid": "3043381b-d085-58cd-b695-6659adfbbb5b", 00:17:29.500 "is_configured": true, 00:17:29.500 "data_offset": 256, 00:17:29.500 "data_size": 7936 00:17:29.500 } 00:17:29.500 ] 00:17:29.500 }' 00:17:29.500 17:34:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:29.500 17:34:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:29.500 17:34:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:29.500 17:34:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:29.500 17:34:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:30.441 17:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 
00:17:30.441 17:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:30.441 17:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:30.441 17:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:30.441 17:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:30.441 17:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:30.441 17:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.441 17:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.441 17:34:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.441 17:34:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.441 17:34:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.441 17:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:30.441 "name": "raid_bdev1", 00:17:30.441 "uuid": "c3e5df2d-bde3-4d2d-a9dd-9e6ad1e67529", 00:17:30.441 "strip_size_kb": 0, 00:17:30.441 "state": "online", 00:17:30.441 "raid_level": "raid1", 00:17:30.441 "superblock": true, 00:17:30.441 "num_base_bdevs": 2, 00:17:30.441 "num_base_bdevs_discovered": 2, 00:17:30.441 "num_base_bdevs_operational": 2, 00:17:30.441 "process": { 00:17:30.441 "type": "rebuild", 00:17:30.441 "target": "spare", 00:17:30.441 "progress": { 00:17:30.441 "blocks": 5632, 00:17:30.441 "percent": 70 00:17:30.441 } 00:17:30.441 }, 00:17:30.441 "base_bdevs_list": [ 00:17:30.441 { 00:17:30.441 "name": "spare", 00:17:30.441 "uuid": "2fc237bc-d740-5fbc-96a3-479c72c2cf0b", 00:17:30.441 "is_configured": true, 00:17:30.441 
"data_offset": 256, 00:17:30.441 "data_size": 7936 00:17:30.441 }, 00:17:30.441 { 00:17:30.441 "name": "BaseBdev2", 00:17:30.441 "uuid": "3043381b-d085-58cd-b695-6659adfbbb5b", 00:17:30.441 "is_configured": true, 00:17:30.441 "data_offset": 256, 00:17:30.441 "data_size": 7936 00:17:30.441 } 00:17:30.441 ] 00:17:30.441 }' 00:17:30.441 17:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:30.702 17:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:30.702 17:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:30.702 17:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:30.702 17:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:31.274 [2024-12-07 17:34:04.603394] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:31.274 [2024-12-07 17:34:04.603590] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:31.274 [2024-12-07 17:34:04.603743] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:31.534 17:34:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:31.534 17:34:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:31.534 17:34:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:31.534 17:34:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:31.534 17:34:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:31.534 17:34:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:31.534 17:34:04 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.534 17:34:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.534 17:34:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.535 17:34:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.796 17:34:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.796 17:34:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:31.796 "name": "raid_bdev1", 00:17:31.796 "uuid": "c3e5df2d-bde3-4d2d-a9dd-9e6ad1e67529", 00:17:31.796 "strip_size_kb": 0, 00:17:31.796 "state": "online", 00:17:31.796 "raid_level": "raid1", 00:17:31.796 "superblock": true, 00:17:31.796 "num_base_bdevs": 2, 00:17:31.796 "num_base_bdevs_discovered": 2, 00:17:31.796 "num_base_bdevs_operational": 2, 00:17:31.796 "base_bdevs_list": [ 00:17:31.796 { 00:17:31.796 "name": "spare", 00:17:31.796 "uuid": "2fc237bc-d740-5fbc-96a3-479c72c2cf0b", 00:17:31.796 "is_configured": true, 00:17:31.796 "data_offset": 256, 00:17:31.796 "data_size": 7936 00:17:31.796 }, 00:17:31.796 { 00:17:31.796 "name": "BaseBdev2", 00:17:31.796 "uuid": "3043381b-d085-58cd-b695-6659adfbbb5b", 00:17:31.796 "is_configured": true, 00:17:31.796 "data_offset": 256, 00:17:31.796 "data_size": 7936 00:17:31.796 } 00:17:31.796 ] 00:17:31.796 }' 00:17:31.796 17:34:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:31.796 17:34:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:31.796 17:34:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:31.796 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:31.796 17:34:05 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:17:31.796 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:31.796 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:31.796 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:31.796 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:31.796 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:31.796 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.796 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.796 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.796 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.796 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.796 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:31.796 "name": "raid_bdev1", 00:17:31.796 "uuid": "c3e5df2d-bde3-4d2d-a9dd-9e6ad1e67529", 00:17:31.796 "strip_size_kb": 0, 00:17:31.796 "state": "online", 00:17:31.796 "raid_level": "raid1", 00:17:31.796 "superblock": true, 00:17:31.796 "num_base_bdevs": 2, 00:17:31.796 "num_base_bdevs_discovered": 2, 00:17:31.796 "num_base_bdevs_operational": 2, 00:17:31.796 "base_bdevs_list": [ 00:17:31.796 { 00:17:31.796 "name": "spare", 00:17:31.796 "uuid": "2fc237bc-d740-5fbc-96a3-479c72c2cf0b", 00:17:31.796 "is_configured": true, 00:17:31.796 "data_offset": 256, 00:17:31.796 "data_size": 7936 00:17:31.796 }, 00:17:31.796 { 00:17:31.796 "name": "BaseBdev2", 00:17:31.796 "uuid": 
"3043381b-d085-58cd-b695-6659adfbbb5b", 00:17:31.796 "is_configured": true, 00:17:31.796 "data_offset": 256, 00:17:31.796 "data_size": 7936 00:17:31.796 } 00:17:31.796 ] 00:17:31.796 }' 00:17:31.796 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:31.796 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:31.796 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:31.796 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:31.796 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:31.796 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:32.057 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:32.057 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:32.057 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:32.057 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:32.057 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:32.057 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:32.057 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:32.057 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:32.057 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.057 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:32.057 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.057 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:32.057 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.057 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:32.057 "name": "raid_bdev1", 00:17:32.057 "uuid": "c3e5df2d-bde3-4d2d-a9dd-9e6ad1e67529", 00:17:32.057 "strip_size_kb": 0, 00:17:32.057 "state": "online", 00:17:32.057 "raid_level": "raid1", 00:17:32.057 "superblock": true, 00:17:32.057 "num_base_bdevs": 2, 00:17:32.057 "num_base_bdevs_discovered": 2, 00:17:32.057 "num_base_bdevs_operational": 2, 00:17:32.057 "base_bdevs_list": [ 00:17:32.057 { 00:17:32.057 "name": "spare", 00:17:32.057 "uuid": "2fc237bc-d740-5fbc-96a3-479c72c2cf0b", 00:17:32.057 "is_configured": true, 00:17:32.057 "data_offset": 256, 00:17:32.057 "data_size": 7936 00:17:32.057 }, 00:17:32.057 { 00:17:32.057 "name": "BaseBdev2", 00:17:32.057 "uuid": "3043381b-d085-58cd-b695-6659adfbbb5b", 00:17:32.057 "is_configured": true, 00:17:32.057 "data_offset": 256, 00:17:32.057 "data_size": 7936 00:17:32.057 } 00:17:32.057 ] 00:17:32.057 }' 00:17:32.057 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:32.057 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:32.317 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:32.317 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.317 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:32.317 [2024-12-07 17:34:05.611132] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:32.317 [2024-12-07 17:34:05.611172] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:32.317 [2024-12-07 17:34:05.611281] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:32.317 [2024-12-07 17:34:05.611361] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:32.317 [2024-12-07 17:34:05.611375] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:32.317 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.317 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.317 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.317 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:32.317 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:17:32.317 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.317 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:32.317 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:32.317 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:32.317 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:32.317 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:32.317 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:32.317 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:32.317 17:34:05 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:32.317 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:32.317 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:17:32.317 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:32.317 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:32.317 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:32.578 /dev/nbd0 00:17:32.578 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:32.578 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:32.578 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:32.578 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:17:32.578 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:32.578 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:32.578 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:32.578 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:17:32.578 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:32.578 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:32.578 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:32.578 1+0 records in 
00:17:32.578 1+0 records out 00:17:32.578 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00029559 s, 13.9 MB/s 00:17:32.578 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:32.578 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:17:32.578 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:32.578 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:32.578 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:17:32.578 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:32.578 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:32.578 17:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:32.838 /dev/nbd1 00:17:32.838 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:32.838 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:32.838 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:32.838 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:17:32.838 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:32.838 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:32.838 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:32.838 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:17:32.838 
17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:32.838 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:32.838 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:32.838 1+0 records in 00:17:32.838 1+0 records out 00:17:32.838 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000413881 s, 9.9 MB/s 00:17:32.838 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:32.838 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:17:32.838 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:32.838 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:32.838 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:17:32.838 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:32.838 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:32.838 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:33.097 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:33.097 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:33.097 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:33.097 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:33.097 17:34:06 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/nbd_common.sh@51 -- # local i 00:17:33.097 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:33.097 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:33.357 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:33.357 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:33.357 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:33.357 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:33.357 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:33.357 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:33.357 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:33.357 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:33.357 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:33.357 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:33.617 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:33.617 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:33.617 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:33.617 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:33.617 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:33.617 17:34:06 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:33.617 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:33.617 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:33.617 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:33.617 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:33.617 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.617 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.617 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.617 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:33.617 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.617 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.617 [2024-12-07 17:34:06.796325] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:33.617 [2024-12-07 17:34:06.796457] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:33.617 [2024-12-07 17:34:06.796494] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:33.617 [2024-12-07 17:34:06.796506] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:33.617 [2024-12-07 17:34:06.799051] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:33.617 [2024-12-07 17:34:06.799092] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:33.617 [2024-12-07 17:34:06.799208] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: 
raid superblock found on bdev spare 00:17:33.617 [2024-12-07 17:34:06.799273] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:33.618 [2024-12-07 17:34:06.799433] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:33.618 spare 00:17:33.618 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.618 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:33.618 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.618 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.618 [2024-12-07 17:34:06.899387] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:33.618 [2024-12-07 17:34:06.899421] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:33.618 [2024-12-07 17:34:06.899710] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:17:33.618 [2024-12-07 17:34:06.899905] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:33.618 [2024-12-07 17:34:06.899916] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:33.618 [2024-12-07 17:34:06.900161] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:33.618 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.618 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:33.618 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:33.618 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:33.618 
17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:33.618 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:33.618 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:33.618 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:33.618 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:33.618 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:33.618 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:33.618 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.618 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.618 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.618 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.618 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.618 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:33.618 "name": "raid_bdev1", 00:17:33.618 "uuid": "c3e5df2d-bde3-4d2d-a9dd-9e6ad1e67529", 00:17:33.618 "strip_size_kb": 0, 00:17:33.618 "state": "online", 00:17:33.618 "raid_level": "raid1", 00:17:33.618 "superblock": true, 00:17:33.618 "num_base_bdevs": 2, 00:17:33.618 "num_base_bdevs_discovered": 2, 00:17:33.618 "num_base_bdevs_operational": 2, 00:17:33.618 "base_bdevs_list": [ 00:17:33.618 { 00:17:33.618 "name": "spare", 00:17:33.618 "uuid": "2fc237bc-d740-5fbc-96a3-479c72c2cf0b", 00:17:33.618 "is_configured": true, 00:17:33.618 "data_offset": 256, 00:17:33.618 
"data_size": 7936 00:17:33.618 }, 00:17:33.618 { 00:17:33.618 "name": "BaseBdev2", 00:17:33.618 "uuid": "3043381b-d085-58cd-b695-6659adfbbb5b", 00:17:33.618 "is_configured": true, 00:17:33.618 "data_offset": 256, 00:17:33.618 "data_size": 7936 00:17:33.618 } 00:17:33.618 ] 00:17:33.618 }' 00:17:33.618 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:33.618 17:34:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.187 17:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:34.187 17:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:34.187 17:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:34.187 17:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:34.187 17:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:34.187 17:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.187 17:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.187 17:34:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.187 17:34:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.187 17:34:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.187 17:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:34.187 "name": "raid_bdev1", 00:17:34.187 "uuid": "c3e5df2d-bde3-4d2d-a9dd-9e6ad1e67529", 00:17:34.187 "strip_size_kb": 0, 00:17:34.187 "state": "online", 00:17:34.187 "raid_level": "raid1", 00:17:34.187 "superblock": true, 00:17:34.187 "num_base_bdevs": 2, 
00:17:34.187 "num_base_bdevs_discovered": 2, 00:17:34.187 "num_base_bdevs_operational": 2, 00:17:34.187 "base_bdevs_list": [ 00:17:34.187 { 00:17:34.187 "name": "spare", 00:17:34.187 "uuid": "2fc237bc-d740-5fbc-96a3-479c72c2cf0b", 00:17:34.187 "is_configured": true, 00:17:34.187 "data_offset": 256, 00:17:34.187 "data_size": 7936 00:17:34.187 }, 00:17:34.187 { 00:17:34.187 "name": "BaseBdev2", 00:17:34.187 "uuid": "3043381b-d085-58cd-b695-6659adfbbb5b", 00:17:34.187 "is_configured": true, 00:17:34.187 "data_offset": 256, 00:17:34.187 "data_size": 7936 00:17:34.187 } 00:17:34.187 ] 00:17:34.187 }' 00:17:34.187 17:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:34.187 17:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:34.187 17:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:34.187 17:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:34.187 17:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.187 17:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:34.187 17:34:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.187 17:34:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.187 17:34:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.446 17:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:34.446 17:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:34.446 17:34:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.446 17:34:07 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.446 [2024-12-07 17:34:07.579291] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:34.446 17:34:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.446 17:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:34.446 17:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:34.446 17:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:34.446 17:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:34.446 17:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:34.446 17:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:34.446 17:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:34.446 17:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:34.446 17:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:34.446 17:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:34.446 17:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.446 17:34:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.446 17:34:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.446 17:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.446 17:34:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.446 
17:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:34.446 "name": "raid_bdev1", 00:17:34.446 "uuid": "c3e5df2d-bde3-4d2d-a9dd-9e6ad1e67529", 00:17:34.446 "strip_size_kb": 0, 00:17:34.446 "state": "online", 00:17:34.446 "raid_level": "raid1", 00:17:34.446 "superblock": true, 00:17:34.446 "num_base_bdevs": 2, 00:17:34.446 "num_base_bdevs_discovered": 1, 00:17:34.447 "num_base_bdevs_operational": 1, 00:17:34.447 "base_bdevs_list": [ 00:17:34.447 { 00:17:34.447 "name": null, 00:17:34.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.447 "is_configured": false, 00:17:34.447 "data_offset": 0, 00:17:34.447 "data_size": 7936 00:17:34.447 }, 00:17:34.447 { 00:17:34.447 "name": "BaseBdev2", 00:17:34.447 "uuid": "3043381b-d085-58cd-b695-6659adfbbb5b", 00:17:34.447 "is_configured": true, 00:17:34.447 "data_offset": 256, 00:17:34.447 "data_size": 7936 00:17:34.447 } 00:17:34.447 ] 00:17:34.447 }' 00:17:34.447 17:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:34.447 17:34:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.706 17:34:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:34.706 17:34:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.706 17:34:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.706 [2024-12-07 17:34:08.054524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:34.706 [2024-12-07 17:34:08.054834] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:34.706 [2024-12-07 17:34:08.054916] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:34.706 [2024-12-07 17:34:08.055006] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:34.706 [2024-12-07 17:34:08.072505] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:17:34.707 17:34:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.707 17:34:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:34.707 [2024-12-07 17:34:08.074748] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:36.108 17:34:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:36.108 17:34:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:36.108 17:34:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:36.108 17:34:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:36.108 17:34:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:36.108 17:34:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.108 17:34:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.108 17:34:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.108 17:34:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.108 17:34:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.108 17:34:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:36.108 "name": "raid_bdev1", 00:17:36.108 "uuid": "c3e5df2d-bde3-4d2d-a9dd-9e6ad1e67529", 00:17:36.108 "strip_size_kb": 0, 00:17:36.108 "state": "online", 
00:17:36.108 "raid_level": "raid1", 00:17:36.108 "superblock": true, 00:17:36.108 "num_base_bdevs": 2, 00:17:36.108 "num_base_bdevs_discovered": 2, 00:17:36.108 "num_base_bdevs_operational": 2, 00:17:36.108 "process": { 00:17:36.108 "type": "rebuild", 00:17:36.108 "target": "spare", 00:17:36.108 "progress": { 00:17:36.108 "blocks": 2560, 00:17:36.108 "percent": 32 00:17:36.108 } 00:17:36.108 }, 00:17:36.108 "base_bdevs_list": [ 00:17:36.108 { 00:17:36.108 "name": "spare", 00:17:36.108 "uuid": "2fc237bc-d740-5fbc-96a3-479c72c2cf0b", 00:17:36.108 "is_configured": true, 00:17:36.108 "data_offset": 256, 00:17:36.108 "data_size": 7936 00:17:36.108 }, 00:17:36.108 { 00:17:36.108 "name": "BaseBdev2", 00:17:36.108 "uuid": "3043381b-d085-58cd-b695-6659adfbbb5b", 00:17:36.108 "is_configured": true, 00:17:36.108 "data_offset": 256, 00:17:36.108 "data_size": 7936 00:17:36.108 } 00:17:36.108 ] 00:17:36.108 }' 00:17:36.108 17:34:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:36.108 17:34:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:36.108 17:34:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:36.108 17:34:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:36.108 17:34:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:36.108 17:34:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.108 17:34:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.108 [2024-12-07 17:34:09.234323] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:36.108 [2024-12-07 17:34:09.283895] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:36.108 [2024-12-07 
17:34:09.284077] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:36.108 [2024-12-07 17:34:09.284099] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:36.108 [2024-12-07 17:34:09.284112] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:36.108 17:34:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.108 17:34:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:36.108 17:34:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:36.108 17:34:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:36.108 17:34:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:36.108 17:34:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:36.108 17:34:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:36.108 17:34:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:36.108 17:34:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:36.108 17:34:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:36.108 17:34:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:36.108 17:34:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.108 17:34:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.108 17:34:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.108 17:34:09 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:17:36.108 17:34:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.108 17:34:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:36.108 "name": "raid_bdev1", 00:17:36.108 "uuid": "c3e5df2d-bde3-4d2d-a9dd-9e6ad1e67529", 00:17:36.108 "strip_size_kb": 0, 00:17:36.108 "state": "online", 00:17:36.108 "raid_level": "raid1", 00:17:36.108 "superblock": true, 00:17:36.108 "num_base_bdevs": 2, 00:17:36.108 "num_base_bdevs_discovered": 1, 00:17:36.108 "num_base_bdevs_operational": 1, 00:17:36.108 "base_bdevs_list": [ 00:17:36.108 { 00:17:36.108 "name": null, 00:17:36.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.108 "is_configured": false, 00:17:36.108 "data_offset": 0, 00:17:36.108 "data_size": 7936 00:17:36.108 }, 00:17:36.108 { 00:17:36.108 "name": "BaseBdev2", 00:17:36.108 "uuid": "3043381b-d085-58cd-b695-6659adfbbb5b", 00:17:36.108 "is_configured": true, 00:17:36.108 "data_offset": 256, 00:17:36.108 "data_size": 7936 00:17:36.108 } 00:17:36.108 ] 00:17:36.108 }' 00:17:36.108 17:34:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:36.108 17:34:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.678 17:34:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:36.678 17:34:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.678 17:34:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.678 [2024-12-07 17:34:09.777852] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:36.678 [2024-12-07 17:34:09.778046] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.678 [2024-12-07 17:34:09.778103] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000ab80 00:17:36.678 [2024-12-07 17:34:09.778154] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.678 [2024-12-07 17:34:09.778762] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.678 [2024-12-07 17:34:09.778844] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:36.678 [2024-12-07 17:34:09.779015] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:36.678 [2024-12-07 17:34:09.779069] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:36.678 [2024-12-07 17:34:09.779121] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:36.678 [2024-12-07 17:34:09.779219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:36.678 [2024-12-07 17:34:09.796800] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:17:36.678 spare 00:17:36.678 17:34:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.679 17:34:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:36.679 [2024-12-07 17:34:09.799055] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:37.619 17:34:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:37.619 17:34:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:37.619 17:34:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:37.619 17:34:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:37.619 17:34:10 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:37.619 17:34:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.619 17:34:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.619 17:34:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.619 17:34:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:37.619 17:34:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.619 17:34:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:37.619 "name": "raid_bdev1", 00:17:37.619 "uuid": "c3e5df2d-bde3-4d2d-a9dd-9e6ad1e67529", 00:17:37.619 "strip_size_kb": 0, 00:17:37.619 "state": "online", 00:17:37.619 "raid_level": "raid1", 00:17:37.619 "superblock": true, 00:17:37.619 "num_base_bdevs": 2, 00:17:37.619 "num_base_bdevs_discovered": 2, 00:17:37.619 "num_base_bdevs_operational": 2, 00:17:37.619 "process": { 00:17:37.619 "type": "rebuild", 00:17:37.619 "target": "spare", 00:17:37.619 "progress": { 00:17:37.619 "blocks": 2560, 00:17:37.619 "percent": 32 00:17:37.619 } 00:17:37.619 }, 00:17:37.619 "base_bdevs_list": [ 00:17:37.619 { 00:17:37.619 "name": "spare", 00:17:37.619 "uuid": "2fc237bc-d740-5fbc-96a3-479c72c2cf0b", 00:17:37.619 "is_configured": true, 00:17:37.619 "data_offset": 256, 00:17:37.619 "data_size": 7936 00:17:37.620 }, 00:17:37.620 { 00:17:37.620 "name": "BaseBdev2", 00:17:37.620 "uuid": "3043381b-d085-58cd-b695-6659adfbbb5b", 00:17:37.620 "is_configured": true, 00:17:37.620 "data_offset": 256, 00:17:37.620 "data_size": 7936 00:17:37.620 } 00:17:37.620 ] 00:17:37.620 }' 00:17:37.620 17:34:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:37.620 17:34:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:17:37.620 17:34:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:37.620 17:34:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:37.620 17:34:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:37.620 17:34:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.620 17:34:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:37.620 [2024-12-07 17:34:10.961991] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:37.880 [2024-12-07 17:34:11.005833] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:37.880 [2024-12-07 17:34:11.005907] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:37.880 [2024-12-07 17:34:11.005925] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:37.880 [2024-12-07 17:34:11.005932] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:37.880 17:34:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.880 17:34:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:37.880 17:34:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:37.880 17:34:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:37.880 17:34:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:37.880 17:34:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:37.880 17:34:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:17:37.880 17:34:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.880 17:34:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.880 17:34:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.880 17:34:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.880 17:34:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.880 17:34:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.880 17:34:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:37.880 17:34:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.880 17:34:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.880 17:34:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.880 "name": "raid_bdev1", 00:17:37.880 "uuid": "c3e5df2d-bde3-4d2d-a9dd-9e6ad1e67529", 00:17:37.880 "strip_size_kb": 0, 00:17:37.880 "state": "online", 00:17:37.880 "raid_level": "raid1", 00:17:37.880 "superblock": true, 00:17:37.881 "num_base_bdevs": 2, 00:17:37.881 "num_base_bdevs_discovered": 1, 00:17:37.881 "num_base_bdevs_operational": 1, 00:17:37.881 "base_bdevs_list": [ 00:17:37.881 { 00:17:37.881 "name": null, 00:17:37.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.881 "is_configured": false, 00:17:37.881 "data_offset": 0, 00:17:37.881 "data_size": 7936 00:17:37.881 }, 00:17:37.881 { 00:17:37.881 "name": "BaseBdev2", 00:17:37.881 "uuid": "3043381b-d085-58cd-b695-6659adfbbb5b", 00:17:37.881 "is_configured": true, 00:17:37.881 "data_offset": 256, 00:17:37.881 "data_size": 7936 00:17:37.881 } 00:17:37.881 ] 00:17:37.881 }' 
00:17:37.881 17:34:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.881 17:34:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:38.142 17:34:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:38.142 17:34:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:38.142 17:34:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:38.142 17:34:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:38.142 17:34:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:38.142 17:34:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.142 17:34:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.142 17:34:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.142 17:34:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:38.142 17:34:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.142 17:34:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:38.142 "name": "raid_bdev1", 00:17:38.142 "uuid": "c3e5df2d-bde3-4d2d-a9dd-9e6ad1e67529", 00:17:38.142 "strip_size_kb": 0, 00:17:38.142 "state": "online", 00:17:38.142 "raid_level": "raid1", 00:17:38.142 "superblock": true, 00:17:38.142 "num_base_bdevs": 2, 00:17:38.142 "num_base_bdevs_discovered": 1, 00:17:38.142 "num_base_bdevs_operational": 1, 00:17:38.142 "base_bdevs_list": [ 00:17:38.142 { 00:17:38.142 "name": null, 00:17:38.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.142 "is_configured": false, 00:17:38.142 "data_offset": 0, 
00:17:38.142 "data_size": 7936 00:17:38.142 }, 00:17:38.142 { 00:17:38.142 "name": "BaseBdev2", 00:17:38.142 "uuid": "3043381b-d085-58cd-b695-6659adfbbb5b", 00:17:38.142 "is_configured": true, 00:17:38.142 "data_offset": 256, 00:17:38.142 "data_size": 7936 00:17:38.142 } 00:17:38.142 ] 00:17:38.142 }' 00:17:38.142 17:34:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:38.403 17:34:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:38.403 17:34:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:38.403 17:34:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:38.403 17:34:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:38.403 17:34:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.403 17:34:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:38.403 17:34:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.403 17:34:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:38.403 17:34:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.403 17:34:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:38.403 [2024-12-07 17:34:11.591189] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:38.403 [2024-12-07 17:34:11.591255] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:38.403 [2024-12-07 17:34:11.591286] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:38.403 [2024-12-07 17:34:11.591306] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:38.403 [2024-12-07 17:34:11.591832] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:38.403 [2024-12-07 17:34:11.591866] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:38.403 [2024-12-07 17:34:11.591971] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:38.403 [2024-12-07 17:34:11.591995] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:38.403 [2024-12-07 17:34:11.592006] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:38.403 [2024-12-07 17:34:11.592016] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:38.403 BaseBdev1 00:17:38.403 17:34:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.403 17:34:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:39.345 17:34:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:39.345 17:34:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:39.345 17:34:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:39.345 17:34:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:39.345 17:34:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:39.345 17:34:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:39.345 17:34:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:39.345 17:34:12 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:39.345 17:34:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:39.345 17:34:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:39.345 17:34:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.345 17:34:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.345 17:34:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.345 17:34:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.345 17:34:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.345 17:34:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:39.345 "name": "raid_bdev1", 00:17:39.345 "uuid": "c3e5df2d-bde3-4d2d-a9dd-9e6ad1e67529", 00:17:39.345 "strip_size_kb": 0, 00:17:39.345 "state": "online", 00:17:39.345 "raid_level": "raid1", 00:17:39.345 "superblock": true, 00:17:39.345 "num_base_bdevs": 2, 00:17:39.345 "num_base_bdevs_discovered": 1, 00:17:39.345 "num_base_bdevs_operational": 1, 00:17:39.345 "base_bdevs_list": [ 00:17:39.345 { 00:17:39.345 "name": null, 00:17:39.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.345 "is_configured": false, 00:17:39.345 "data_offset": 0, 00:17:39.345 "data_size": 7936 00:17:39.345 }, 00:17:39.345 { 00:17:39.346 "name": "BaseBdev2", 00:17:39.346 "uuid": "3043381b-d085-58cd-b695-6659adfbbb5b", 00:17:39.346 "is_configured": true, 00:17:39.346 "data_offset": 256, 00:17:39.346 "data_size": 7936 00:17:39.346 } 00:17:39.346 ] 00:17:39.346 }' 00:17:39.346 17:34:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:39.346 17:34:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 
00:17:39.916 17:34:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:39.916 17:34:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:39.916 17:34:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:39.916 17:34:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:39.916 17:34:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:39.916 17:34:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.916 17:34:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.916 17:34:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.916 17:34:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.916 17:34:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.916 17:34:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:39.916 "name": "raid_bdev1", 00:17:39.916 "uuid": "c3e5df2d-bde3-4d2d-a9dd-9e6ad1e67529", 00:17:39.916 "strip_size_kb": 0, 00:17:39.916 "state": "online", 00:17:39.916 "raid_level": "raid1", 00:17:39.916 "superblock": true, 00:17:39.916 "num_base_bdevs": 2, 00:17:39.916 "num_base_bdevs_discovered": 1, 00:17:39.916 "num_base_bdevs_operational": 1, 00:17:39.916 "base_bdevs_list": [ 00:17:39.916 { 00:17:39.916 "name": null, 00:17:39.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.916 "is_configured": false, 00:17:39.916 "data_offset": 0, 00:17:39.916 "data_size": 7936 00:17:39.916 }, 00:17:39.916 { 00:17:39.916 "name": "BaseBdev2", 00:17:39.916 "uuid": "3043381b-d085-58cd-b695-6659adfbbb5b", 00:17:39.916 "is_configured": true, 
00:17:39.916 "data_offset": 256, 00:17:39.916 "data_size": 7936 00:17:39.916 } 00:17:39.916 ] 00:17:39.916 }' 00:17:39.916 17:34:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:39.916 17:34:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:39.916 17:34:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:39.916 17:34:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:39.916 17:34:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:39.916 17:34:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # local es=0 00:17:39.916 17:34:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:39.916 17:34:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:39.916 17:34:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:39.916 17:34:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:39.916 17:34:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:39.916 17:34:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:39.916 17:34:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.916 17:34:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.916 [2024-12-07 17:34:13.192692] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:39.916 [2024-12-07 17:34:13.192876] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:39.916 [2024-12-07 17:34:13.192900] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:39.916 request: 00:17:39.916 { 00:17:39.916 "base_bdev": "BaseBdev1", 00:17:39.916 "raid_bdev": "raid_bdev1", 00:17:39.916 "method": "bdev_raid_add_base_bdev", 00:17:39.916 "req_id": 1 00:17:39.916 } 00:17:39.916 Got JSON-RPC error response 00:17:39.916 response: 00:17:39.916 { 00:17:39.916 "code": -22, 00:17:39.916 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:39.916 } 00:17:39.916 17:34:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:39.916 17:34:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:17:39.916 17:34:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:39.916 17:34:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:39.916 17:34:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:39.916 17:34:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:40.858 17:34:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:40.858 17:34:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:40.858 17:34:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:40.858 17:34:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:40.858 17:34:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:40.858 17:34:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:17:40.858 17:34:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:40.858 17:34:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:40.858 17:34:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:40.858 17:34:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:40.858 17:34:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.858 17:34:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.858 17:34:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.858 17:34:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.858 17:34:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.117 17:34:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:41.117 "name": "raid_bdev1", 00:17:41.117 "uuid": "c3e5df2d-bde3-4d2d-a9dd-9e6ad1e67529", 00:17:41.117 "strip_size_kb": 0, 00:17:41.117 "state": "online", 00:17:41.117 "raid_level": "raid1", 00:17:41.117 "superblock": true, 00:17:41.117 "num_base_bdevs": 2, 00:17:41.117 "num_base_bdevs_discovered": 1, 00:17:41.117 "num_base_bdevs_operational": 1, 00:17:41.117 "base_bdevs_list": [ 00:17:41.117 { 00:17:41.117 "name": null, 00:17:41.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.117 "is_configured": false, 00:17:41.117 "data_offset": 0, 00:17:41.117 "data_size": 7936 00:17:41.117 }, 00:17:41.117 { 00:17:41.117 "name": "BaseBdev2", 00:17:41.117 "uuid": "3043381b-d085-58cd-b695-6659adfbbb5b", 00:17:41.117 "is_configured": true, 00:17:41.117 "data_offset": 256, 00:17:41.117 "data_size": 7936 00:17:41.117 } 00:17:41.117 ] 00:17:41.117 }' 
00:17:41.117 17:34:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:41.117 17:34:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:41.378 17:34:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:41.378 17:34:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:41.378 17:34:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:41.378 17:34:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:41.378 17:34:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:41.378 17:34:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.378 17:34:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.378 17:34:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.378 17:34:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:41.378 17:34:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.378 17:34:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:41.378 "name": "raid_bdev1", 00:17:41.378 "uuid": "c3e5df2d-bde3-4d2d-a9dd-9e6ad1e67529", 00:17:41.378 "strip_size_kb": 0, 00:17:41.378 "state": "online", 00:17:41.378 "raid_level": "raid1", 00:17:41.378 "superblock": true, 00:17:41.378 "num_base_bdevs": 2, 00:17:41.378 "num_base_bdevs_discovered": 1, 00:17:41.378 "num_base_bdevs_operational": 1, 00:17:41.378 "base_bdevs_list": [ 00:17:41.378 { 00:17:41.378 "name": null, 00:17:41.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.378 "is_configured": false, 00:17:41.378 "data_offset": 0, 
00:17:41.378 "data_size": 7936 00:17:41.378 }, 00:17:41.378 { 00:17:41.378 "name": "BaseBdev2", 00:17:41.378 "uuid": "3043381b-d085-58cd-b695-6659adfbbb5b", 00:17:41.378 "is_configured": true, 00:17:41.378 "data_offset": 256, 00:17:41.378 "data_size": 7936 00:17:41.378 } 00:17:41.378 ] 00:17:41.378 }' 00:17:41.378 17:34:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:41.378 17:34:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:41.378 17:34:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:41.378 17:34:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:41.378 17:34:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86486 00:17:41.378 17:34:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86486 ']' 00:17:41.378 17:34:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86486 00:17:41.378 17:34:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:17:41.378 17:34:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:41.378 17:34:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86486 00:17:41.378 killing process with pid 86486 00:17:41.378 Received shutdown signal, test time was about 60.000000 seconds 00:17:41.378 00:17:41.378 Latency(us) 00:17:41.378 [2024-12-07T17:34:14.760Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:41.378 [2024-12-07T17:34:14.760Z] =================================================================================================================== 00:17:41.378 [2024-12-07T17:34:14.760Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:41.378 17:34:14 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:41.378 17:34:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:41.378 17:34:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86486' 00:17:41.378 17:34:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86486 00:17:41.378 [2024-12-07 17:34:14.752713] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:41.378 17:34:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86486 00:17:41.378 [2024-12-07 17:34:14.752864] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:41.378 [2024-12-07 17:34:14.752928] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:41.378 [2024-12-07 17:34:14.752939] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:41.947 [2024-12-07 17:34:15.038454] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:42.885 17:34:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:17:42.885 00:17:42.885 real 0m19.879s 00:17:42.885 user 0m25.877s 00:17:42.885 sys 0m2.677s 00:17:42.885 17:34:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:42.885 17:34:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.885 ************************************ 00:17:42.885 END TEST raid_rebuild_test_sb_4k 00:17:42.885 ************************************ 00:17:42.885 17:34:16 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:17:42.885 17:34:16 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:17:42.885 17:34:16 
bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:42.885 17:34:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:42.885 17:34:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:42.885 ************************************ 00:17:42.885 START TEST raid_state_function_test_sb_md_separate 00:17:42.885 ************************************ 00:17:42.885 17:34:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:17:42.885 17:34:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:42.885 17:34:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:42.885 17:34:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:42.885 17:34:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:42.885 17:34:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:42.885 17:34:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:42.885 17:34:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:42.885 17:34:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:42.885 17:34:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:42.885 17:34:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:42.885 17:34:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:42.886 17:34:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:17:42.886 17:34:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:42.886 17:34:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:42.886 17:34:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:42.886 17:34:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:42.886 17:34:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:42.886 17:34:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:42.886 17:34:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:42.886 17:34:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:42.886 17:34:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:42.886 17:34:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:42.886 17:34:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87178 00:17:42.886 17:34:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:42.886 17:34:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87178' 00:17:42.886 Process raid pid: 87178 00:17:42.886 17:34:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87178 00:17:42.886 17:34:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87178 ']' 00:17:42.886 17:34:16 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:42.886 17:34:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:42.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:42.886 17:34:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:42.886 17:34:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:42.886 17:34:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:42.886 [2024-12-07 17:34:16.242319] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:17:42.886 [2024-12-07 17:34:16.242441] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:43.146 [2024-12-07 17:34:16.416542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.146 [2024-12-07 17:34:16.519642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.406 [2024-12-07 17:34:16.719136] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:43.406 [2024-12-07 17:34:16.719176] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:43.975 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:43.975 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:17:43.976 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # 
rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:43.976 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.976 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.976 [2024-12-07 17:34:17.062236] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:43.976 [2024-12-07 17:34:17.062294] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:43.976 [2024-12-07 17:34:17.062304] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:43.976 [2024-12-07 17:34:17.062330] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:43.976 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.976 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:43.976 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:43.976 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:43.976 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:43.976 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:43.976 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:43.976 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:43.976 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:43.976 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:43.976 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:43.976 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.976 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:43.976 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.976 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.976 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.976 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:43.976 "name": "Existed_Raid", 00:17:43.976 "uuid": "5507b2f8-9081-4250-b25d-702383405aae", 00:17:43.976 "strip_size_kb": 0, 00:17:43.976 "state": "configuring", 00:17:43.976 "raid_level": "raid1", 00:17:43.976 "superblock": true, 00:17:43.976 "num_base_bdevs": 2, 00:17:43.976 "num_base_bdevs_discovered": 0, 00:17:43.976 "num_base_bdevs_operational": 2, 00:17:43.976 "base_bdevs_list": [ 00:17:43.976 { 00:17:43.976 "name": "BaseBdev1", 00:17:43.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.976 "is_configured": false, 00:17:43.976 "data_offset": 0, 00:17:43.976 "data_size": 0 00:17:43.976 }, 00:17:43.976 { 00:17:43.976 "name": "BaseBdev2", 00:17:43.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.976 "is_configured": false, 00:17:43.976 "data_offset": 0, 00:17:43.976 "data_size": 0 00:17:43.976 } 00:17:43.976 ] 00:17:43.976 }' 00:17:43.976 17:34:17 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:43.976 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.237 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:44.237 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.237 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.237 [2024-12-07 17:34:17.509413] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:44.237 [2024-12-07 17:34:17.509452] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:44.237 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.237 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:44.237 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.237 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.237 [2024-12-07 17:34:17.521388] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:44.237 [2024-12-07 17:34:17.521429] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:44.237 [2024-12-07 17:34:17.521438] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:44.237 [2024-12-07 17:34:17.521465] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:44.237 17:34:17 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.237 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:17:44.237 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.237 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.237 [2024-12-07 17:34:17.570478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:44.237 BaseBdev1 00:17:44.237 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.237 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:44.237 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:44.237 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:44.237 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:17:44.237 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:44.237 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:44.237 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:44.237 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.237 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.237 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.237 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:44.237 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.237 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.237 [ 00:17:44.237 { 00:17:44.237 "name": "BaseBdev1", 00:17:44.237 "aliases": [ 00:17:44.237 "45ee4293-4cbe-4429-9e7d-ddf2e1721e6e" 00:17:44.237 ], 00:17:44.237 "product_name": "Malloc disk", 00:17:44.237 "block_size": 4096, 00:17:44.237 "num_blocks": 8192, 00:17:44.237 "uuid": "45ee4293-4cbe-4429-9e7d-ddf2e1721e6e", 00:17:44.237 "md_size": 32, 00:17:44.237 "md_interleave": false, 00:17:44.237 "dif_type": 0, 00:17:44.237 "assigned_rate_limits": { 00:17:44.237 "rw_ios_per_sec": 0, 00:17:44.237 "rw_mbytes_per_sec": 0, 00:17:44.237 "r_mbytes_per_sec": 0, 00:17:44.237 "w_mbytes_per_sec": 0 00:17:44.237 }, 00:17:44.237 "claimed": true, 00:17:44.237 "claim_type": "exclusive_write", 00:17:44.237 "zoned": false, 00:17:44.237 "supported_io_types": { 00:17:44.237 "read": true, 00:17:44.237 "write": true, 00:17:44.237 "unmap": true, 00:17:44.237 "flush": true, 00:17:44.237 "reset": true, 00:17:44.237 "nvme_admin": false, 00:17:44.237 "nvme_io": false, 00:17:44.237 "nvme_io_md": false, 00:17:44.237 "write_zeroes": true, 00:17:44.237 "zcopy": true, 00:17:44.237 "get_zone_info": false, 00:17:44.237 "zone_management": false, 00:17:44.237 "zone_append": false, 00:17:44.237 "compare": false, 00:17:44.237 "compare_and_write": false, 00:17:44.237 "abort": true, 00:17:44.237 "seek_hole": false, 00:17:44.237 "seek_data": false, 00:17:44.237 "copy": true, 00:17:44.237 "nvme_iov_md": false 00:17:44.237 }, 00:17:44.237 "memory_domains": [ 00:17:44.237 { 00:17:44.237 "dma_device_id": "system", 00:17:44.237 "dma_device_type": 1 00:17:44.237 }, 
00:17:44.237 { 00:17:44.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:44.237 "dma_device_type": 2 00:17:44.237 } 00:17:44.237 ], 00:17:44.237 "driver_specific": {} 00:17:44.237 } 00:17:44.237 ] 00:17:44.237 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.237 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:17:44.237 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:44.237 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:44.237 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:44.237 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:44.237 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:44.237 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:44.237 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.237 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.237 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:44.237 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:44.237 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.237 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:17:44.497 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.497 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.497 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.497 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:44.497 "name": "Existed_Raid", 00:17:44.497 "uuid": "7b54ae4f-5edc-4cc3-8692-bdadcafcbee1", 00:17:44.497 "strip_size_kb": 0, 00:17:44.497 "state": "configuring", 00:17:44.497 "raid_level": "raid1", 00:17:44.497 "superblock": true, 00:17:44.497 "num_base_bdevs": 2, 00:17:44.497 "num_base_bdevs_discovered": 1, 00:17:44.497 "num_base_bdevs_operational": 2, 00:17:44.497 "base_bdevs_list": [ 00:17:44.497 { 00:17:44.497 "name": "BaseBdev1", 00:17:44.497 "uuid": "45ee4293-4cbe-4429-9e7d-ddf2e1721e6e", 00:17:44.497 "is_configured": true, 00:17:44.497 "data_offset": 256, 00:17:44.497 "data_size": 7936 00:17:44.497 }, 00:17:44.497 { 00:17:44.497 "name": "BaseBdev2", 00:17:44.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.497 "is_configured": false, 00:17:44.497 "data_offset": 0, 00:17:44.497 "data_size": 0 00:17:44.497 } 00:17:44.497 ] 00:17:44.497 }' 00:17:44.497 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:44.497 17:34:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.758 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:44.758 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.758 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:17:44.758 [2024-12-07 17:34:18.065689] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:44.758 [2024-12-07 17:34:18.065734] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:44.758 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.758 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:44.758 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.758 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.758 [2024-12-07 17:34:18.077714] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:44.758 [2024-12-07 17:34:18.079491] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:44.758 [2024-12-07 17:34:18.079543] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:44.758 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.758 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:44.758 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:44.758 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:44.758 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:44.758 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:17:44.758 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:44.758 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:44.758 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:44.758 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.758 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.758 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:44.758 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:44.758 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:44.758 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.758 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.758 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.758 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.758 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:44.758 "name": "Existed_Raid", 00:17:44.758 "uuid": "da153db4-436c-44d0-a43a-57276ea4c9dc", 00:17:44.758 "strip_size_kb": 0, 00:17:44.758 "state": "configuring", 00:17:44.758 "raid_level": "raid1", 00:17:44.758 "superblock": true, 00:17:44.758 "num_base_bdevs": 2, 00:17:44.758 "num_base_bdevs_discovered": 1, 00:17:44.758 
"num_base_bdevs_operational": 2, 00:17:44.758 "base_bdevs_list": [ 00:17:44.758 { 00:17:44.758 "name": "BaseBdev1", 00:17:44.758 "uuid": "45ee4293-4cbe-4429-9e7d-ddf2e1721e6e", 00:17:44.758 "is_configured": true, 00:17:44.758 "data_offset": 256, 00:17:44.758 "data_size": 7936 00:17:44.758 }, 00:17:44.758 { 00:17:44.758 "name": "BaseBdev2", 00:17:44.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.758 "is_configured": false, 00:17:44.759 "data_offset": 0, 00:17:44.759 "data_size": 0 00:17:44.759 } 00:17:44.759 ] 00:17:44.759 }' 00:17:44.759 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:44.759 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.329 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:17:45.329 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.330 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.330 [2024-12-07 17:34:18.533102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:45.330 [2024-12-07 17:34:18.533445] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:45.330 [2024-12-07 17:34:18.533510] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:45.330 [2024-12-07 17:34:18.533623] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:45.330 [2024-12-07 17:34:18.533818] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:45.330 [2024-12-07 17:34:18.533882] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:45.330 [2024-12-07 
17:34:18.534082] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:45.330 BaseBdev2 00:17:45.330 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.330 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:45.330 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:45.330 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:45.330 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:17:45.330 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:45.330 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:45.330 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:45.330 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.330 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.330 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.330 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:45.330 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.330 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.330 [ 00:17:45.330 { 00:17:45.330 "name": "BaseBdev2", 00:17:45.330 "aliases": [ 00:17:45.330 
"013bfdb2-d44e-44ca-8087-c7a822f6975d" 00:17:45.330 ], 00:17:45.330 "product_name": "Malloc disk", 00:17:45.330 "block_size": 4096, 00:17:45.330 "num_blocks": 8192, 00:17:45.330 "uuid": "013bfdb2-d44e-44ca-8087-c7a822f6975d", 00:17:45.330 "md_size": 32, 00:17:45.330 "md_interleave": false, 00:17:45.330 "dif_type": 0, 00:17:45.330 "assigned_rate_limits": { 00:17:45.330 "rw_ios_per_sec": 0, 00:17:45.330 "rw_mbytes_per_sec": 0, 00:17:45.330 "r_mbytes_per_sec": 0, 00:17:45.330 "w_mbytes_per_sec": 0 00:17:45.330 }, 00:17:45.330 "claimed": true, 00:17:45.330 "claim_type": "exclusive_write", 00:17:45.330 "zoned": false, 00:17:45.330 "supported_io_types": { 00:17:45.330 "read": true, 00:17:45.330 "write": true, 00:17:45.330 "unmap": true, 00:17:45.330 "flush": true, 00:17:45.330 "reset": true, 00:17:45.330 "nvme_admin": false, 00:17:45.330 "nvme_io": false, 00:17:45.330 "nvme_io_md": false, 00:17:45.330 "write_zeroes": true, 00:17:45.330 "zcopy": true, 00:17:45.330 "get_zone_info": false, 00:17:45.330 "zone_management": false, 00:17:45.330 "zone_append": false, 00:17:45.330 "compare": false, 00:17:45.330 "compare_and_write": false, 00:17:45.330 "abort": true, 00:17:45.330 "seek_hole": false, 00:17:45.330 "seek_data": false, 00:17:45.330 "copy": true, 00:17:45.330 "nvme_iov_md": false 00:17:45.330 }, 00:17:45.330 "memory_domains": [ 00:17:45.330 { 00:17:45.330 "dma_device_id": "system", 00:17:45.330 "dma_device_type": 1 00:17:45.330 }, 00:17:45.330 { 00:17:45.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:45.330 "dma_device_type": 2 00:17:45.330 } 00:17:45.330 ], 00:17:45.330 "driver_specific": {} 00:17:45.330 } 00:17:45.330 ] 00:17:45.330 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.330 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:17:45.330 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( 
i++ )) 00:17:45.330 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:45.330 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:45.330 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:45.330 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:45.330 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:45.330 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:45.330 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:45.330 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:45.330 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:45.330 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:45.330 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:45.330 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:45.330 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.330 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.330 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.330 17:34:18 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.330 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:45.330 "name": "Existed_Raid", 00:17:45.330 "uuid": "da153db4-436c-44d0-a43a-57276ea4c9dc", 00:17:45.330 "strip_size_kb": 0, 00:17:45.330 "state": "online", 00:17:45.330 "raid_level": "raid1", 00:17:45.330 "superblock": true, 00:17:45.330 "num_base_bdevs": 2, 00:17:45.330 "num_base_bdevs_discovered": 2, 00:17:45.330 "num_base_bdevs_operational": 2, 00:17:45.330 "base_bdevs_list": [ 00:17:45.330 { 00:17:45.330 "name": "BaseBdev1", 00:17:45.330 "uuid": "45ee4293-4cbe-4429-9e7d-ddf2e1721e6e", 00:17:45.330 "is_configured": true, 00:17:45.330 "data_offset": 256, 00:17:45.330 "data_size": 7936 00:17:45.330 }, 00:17:45.330 { 00:17:45.330 "name": "BaseBdev2", 00:17:45.330 "uuid": "013bfdb2-d44e-44ca-8087-c7a822f6975d", 00:17:45.330 "is_configured": true, 00:17:45.330 "data_offset": 256, 00:17:45.330 "data_size": 7936 00:17:45.330 } 00:17:45.330 ] 00:17:45.330 }' 00:17:45.330 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:45.330 17:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.901 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:45.901 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:45.901 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:45.901 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:45.901 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:45.901 17:34:19 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:45.901 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:45.901 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:45.901 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.901 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.901 [2024-12-07 17:34:19.036582] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:45.901 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.901 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:45.901 "name": "Existed_Raid", 00:17:45.901 "aliases": [ 00:17:45.901 "da153db4-436c-44d0-a43a-57276ea4c9dc" 00:17:45.901 ], 00:17:45.901 "product_name": "Raid Volume", 00:17:45.901 "block_size": 4096, 00:17:45.901 "num_blocks": 7936, 00:17:45.901 "uuid": "da153db4-436c-44d0-a43a-57276ea4c9dc", 00:17:45.901 "md_size": 32, 00:17:45.901 "md_interleave": false, 00:17:45.901 "dif_type": 0, 00:17:45.901 "assigned_rate_limits": { 00:17:45.901 "rw_ios_per_sec": 0, 00:17:45.901 "rw_mbytes_per_sec": 0, 00:17:45.901 "r_mbytes_per_sec": 0, 00:17:45.901 "w_mbytes_per_sec": 0 00:17:45.901 }, 00:17:45.901 "claimed": false, 00:17:45.901 "zoned": false, 00:17:45.901 "supported_io_types": { 00:17:45.901 "read": true, 00:17:45.901 "write": true, 00:17:45.901 "unmap": false, 00:17:45.901 "flush": false, 00:17:45.901 "reset": true, 00:17:45.901 "nvme_admin": false, 00:17:45.901 "nvme_io": false, 00:17:45.901 "nvme_io_md": false, 00:17:45.901 "write_zeroes": true, 00:17:45.901 "zcopy": false, 00:17:45.901 "get_zone_info": 
false, 00:17:45.901 "zone_management": false, 00:17:45.901 "zone_append": false, 00:17:45.901 "compare": false, 00:17:45.901 "compare_and_write": false, 00:17:45.901 "abort": false, 00:17:45.901 "seek_hole": false, 00:17:45.901 "seek_data": false, 00:17:45.901 "copy": false, 00:17:45.901 "nvme_iov_md": false 00:17:45.901 }, 00:17:45.901 "memory_domains": [ 00:17:45.901 { 00:17:45.901 "dma_device_id": "system", 00:17:45.901 "dma_device_type": 1 00:17:45.901 }, 00:17:45.901 { 00:17:45.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:45.901 "dma_device_type": 2 00:17:45.901 }, 00:17:45.901 { 00:17:45.901 "dma_device_id": "system", 00:17:45.901 "dma_device_type": 1 00:17:45.901 }, 00:17:45.901 { 00:17:45.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:45.901 "dma_device_type": 2 00:17:45.901 } 00:17:45.901 ], 00:17:45.901 "driver_specific": { 00:17:45.901 "raid": { 00:17:45.901 "uuid": "da153db4-436c-44d0-a43a-57276ea4c9dc", 00:17:45.901 "strip_size_kb": 0, 00:17:45.901 "state": "online", 00:17:45.901 "raid_level": "raid1", 00:17:45.901 "superblock": true, 00:17:45.901 "num_base_bdevs": 2, 00:17:45.901 "num_base_bdevs_discovered": 2, 00:17:45.901 "num_base_bdevs_operational": 2, 00:17:45.901 "base_bdevs_list": [ 00:17:45.901 { 00:17:45.901 "name": "BaseBdev1", 00:17:45.901 "uuid": "45ee4293-4cbe-4429-9e7d-ddf2e1721e6e", 00:17:45.901 "is_configured": true, 00:17:45.901 "data_offset": 256, 00:17:45.901 "data_size": 7936 00:17:45.901 }, 00:17:45.901 { 00:17:45.901 "name": "BaseBdev2", 00:17:45.901 "uuid": "013bfdb2-d44e-44ca-8087-c7a822f6975d", 00:17:45.901 "is_configured": true, 00:17:45.901 "data_offset": 256, 00:17:45.901 "data_size": 7936 00:17:45.901 } 00:17:45.901 ] 00:17:45.901 } 00:17:45.901 } 00:17:45.901 }' 00:17:45.901 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:45.901 17:34:19 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:45.901 BaseBdev2' 00:17:45.901 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:45.901 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:45.901 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:45.901 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:45.901 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:45.901 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.901 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.901 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.901 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:45.901 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:45.901 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:45.901 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:45.901 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.901 17:34:19 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.901 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:45.901 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.901 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:45.901 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:45.901 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:45.901 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.901 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.901 [2024-12-07 17:34:19.267950] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:46.161 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.161 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:46.161 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:46.161 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:46.161 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:17:46.161 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:46.161 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # 
verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:46.162 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:46.162 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:46.162 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:46.162 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:46.162 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:46.162 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.162 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.162 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:46.162 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.162 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.162 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:46.162 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.162 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:46.162 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.162 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.162 "name": "Existed_Raid", 
00:17:46.162 "uuid": "da153db4-436c-44d0-a43a-57276ea4c9dc", 00:17:46.162 "strip_size_kb": 0, 00:17:46.162 "state": "online", 00:17:46.162 "raid_level": "raid1", 00:17:46.162 "superblock": true, 00:17:46.162 "num_base_bdevs": 2, 00:17:46.162 "num_base_bdevs_discovered": 1, 00:17:46.162 "num_base_bdevs_operational": 1, 00:17:46.162 "base_bdevs_list": [ 00:17:46.162 { 00:17:46.162 "name": null, 00:17:46.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.162 "is_configured": false, 00:17:46.162 "data_offset": 0, 00:17:46.162 "data_size": 7936 00:17:46.162 }, 00:17:46.162 { 00:17:46.162 "name": "BaseBdev2", 00:17:46.162 "uuid": "013bfdb2-d44e-44ca-8087-c7a822f6975d", 00:17:46.162 "is_configured": true, 00:17:46.162 "data_offset": 256, 00:17:46.162 "data_size": 7936 00:17:46.162 } 00:17:46.162 ] 00:17:46.162 }' 00:17:46.162 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.162 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:46.731 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:46.732 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:46.732 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.732 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.732 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:46.732 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:46.732 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.732 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:46.732 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:46.732 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:46.732 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.732 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:46.732 [2024-12-07 17:34:19.862296] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:46.732 [2024-12-07 17:34:19.862463] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:46.732 [2024-12-07 17:34:19.960112] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:46.732 [2024-12-07 17:34:19.960163] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:46.732 [2024-12-07 17:34:19.960175] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:46.732 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.732 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:46.732 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:46.732 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.732 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:46.732 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:46.732 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:46.732 17:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.732 17:34:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:46.732 17:34:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:46.732 17:34:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:46.732 17:34:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87178 00:17:46.732 17:34:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87178 ']' 00:17:46.732 17:34:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87178 00:17:46.732 17:34:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:17:46.732 17:34:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:46.732 17:34:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87178 00:17:46.732 17:34:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:46.732 17:34:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:46.732 17:34:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87178' 00:17:46.732 killing process with pid 87178 00:17:46.732 17:34:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87178 00:17:46.732 [2024-12-07 17:34:20.058691] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:46.732 17:34:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87178 00:17:46.732 [2024-12-07 17:34:20.074772] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:48.114 17:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:17:48.114 00:17:48.114 real 0m4.993s 00:17:48.114 user 0m7.169s 00:17:48.114 sys 0m0.885s 00:17:48.114 17:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:48.114 17:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.114 ************************************ 00:17:48.114 END TEST raid_state_function_test_sb_md_separate 00:17:48.114 ************************************ 00:17:48.114 17:34:21 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:17:48.114 17:34:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:48.114 17:34:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:48.114 17:34:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:48.114 ************************************ 00:17:48.114 START TEST raid_superblock_test_md_separate 00:17:48.114 ************************************ 00:17:48.114 17:34:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:17:48.114 17:34:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:48.114 17:34:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:48.114 17:34:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:48.114 17:34:21 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:48.114 17:34:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:48.114 17:34:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:48.114 17:34:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:48.114 17:34:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:48.114 17:34:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:48.114 17:34:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:48.114 17:34:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:48.114 17:34:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:48.114 17:34:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:48.114 17:34:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:48.114 17:34:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:48.114 17:34:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87426 00:17:48.114 17:34:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87426 00:17:48.114 17:34:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87426 ']' 00:17:48.114 17:34:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:48.114 17:34:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:48.114 17:34:21 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:48.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:48.114 17:34:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:48.114 17:34:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.114 17:34:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:48.114 [2024-12-07 17:34:21.298165] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:17:48.114 [2024-12-07 17:34:21.298297] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87426 ] 00:17:48.114 [2024-12-07 17:34:21.470681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:48.374 [2024-12-07 17:34:21.576513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:48.634 [2024-12-07 17:34:21.767465] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:48.634 [2024-12-07 17:34:21.767609] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:48.893 17:34:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:48.893 17:34:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:17:48.893 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:48.893 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:17:48.893 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:48.893 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:48.893 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:48.893 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:48.893 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:48.893 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:48.893 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:17:48.893 17:34:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.894 17:34:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.894 malloc1 00:17:48.894 17:34:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.894 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:48.894 17:34:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.894 17:34:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.894 [2024-12-07 17:34:22.162856] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:48.894 [2024-12-07 17:34:22.162977] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:48.894 [2024-12-07 17:34:22.163017] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:48.894 [2024-12-07 17:34:22.163046] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:48.894 [2024-12-07 17:34:22.164872] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:48.894 [2024-12-07 17:34:22.164965] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:48.894 pt1 00:17:48.894 17:34:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.894 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:48.894 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:48.894 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:48.894 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:48.894 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:48.894 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:48.894 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:48.894 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:48.894 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:17:48.894 17:34:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.894 17:34:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.894 malloc2 00:17:48.894 17:34:22 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.894 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:48.894 17:34:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.894 17:34:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.894 [2024-12-07 17:34:22.221955] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:48.894 [2024-12-07 17:34:22.222008] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:48.894 [2024-12-07 17:34:22.222028] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:48.894 [2024-12-07 17:34:22.222037] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:48.894 [2024-12-07 17:34:22.223879] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:48.894 [2024-12-07 17:34:22.223919] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:48.894 pt2 00:17:48.894 17:34:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.894 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:48.894 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:48.894 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:48.894 17:34:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.894 17:34:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.894 
[2024-12-07 17:34:22.233961] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:48.894 [2024-12-07 17:34:22.235695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:48.894 [2024-12-07 17:34:22.235873] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:48.894 [2024-12-07 17:34:22.235888] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:48.894 [2024-12-07 17:34:22.235976] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:48.894 [2024-12-07 17:34:22.236095] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:48.894 [2024-12-07 17:34:22.236107] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:48.894 [2024-12-07 17:34:22.236213] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:48.894 17:34:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.894 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:48.894 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:48.894 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:48.894 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:48.894 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:48.894 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:48.894 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:17:48.894 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:48.894 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:48.894 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.894 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.894 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.894 17:34:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.894 17:34:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.894 17:34:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.152 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.152 "name": "raid_bdev1", 00:17:49.152 "uuid": "21adc1c0-d614-41b0-aa05-008a469b96b1", 00:17:49.152 "strip_size_kb": 0, 00:17:49.152 "state": "online", 00:17:49.152 "raid_level": "raid1", 00:17:49.152 "superblock": true, 00:17:49.152 "num_base_bdevs": 2, 00:17:49.152 "num_base_bdevs_discovered": 2, 00:17:49.152 "num_base_bdevs_operational": 2, 00:17:49.152 "base_bdevs_list": [ 00:17:49.152 { 00:17:49.152 "name": "pt1", 00:17:49.152 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:49.152 "is_configured": true, 00:17:49.152 "data_offset": 256, 00:17:49.152 "data_size": 7936 00:17:49.152 }, 00:17:49.152 { 00:17:49.152 "name": "pt2", 00:17:49.152 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:49.152 "is_configured": true, 00:17:49.152 "data_offset": 256, 00:17:49.152 "data_size": 7936 00:17:49.152 } 00:17:49.152 ] 00:17:49.152 }' 00:17:49.152 17:34:22 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.152 17:34:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.412 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:49.412 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:49.412 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:49.412 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:49.412 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:49.412 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:49.412 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:49.412 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:49.412 17:34:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.412 17:34:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.412 [2024-12-07 17:34:22.629509] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:49.412 17:34:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.412 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:49.412 "name": "raid_bdev1", 00:17:49.412 "aliases": [ 00:17:49.412 "21adc1c0-d614-41b0-aa05-008a469b96b1" 00:17:49.412 ], 00:17:49.412 "product_name": "Raid Volume", 00:17:49.412 "block_size": 4096, 00:17:49.412 "num_blocks": 7936, 00:17:49.412 "uuid": "21adc1c0-d614-41b0-aa05-008a469b96b1", 
00:17:49.412 "md_size": 32, 00:17:49.412 "md_interleave": false, 00:17:49.412 "dif_type": 0, 00:17:49.412 "assigned_rate_limits": { 00:17:49.412 "rw_ios_per_sec": 0, 00:17:49.412 "rw_mbytes_per_sec": 0, 00:17:49.412 "r_mbytes_per_sec": 0, 00:17:49.412 "w_mbytes_per_sec": 0 00:17:49.412 }, 00:17:49.412 "claimed": false, 00:17:49.412 "zoned": false, 00:17:49.412 "supported_io_types": { 00:17:49.412 "read": true, 00:17:49.412 "write": true, 00:17:49.412 "unmap": false, 00:17:49.412 "flush": false, 00:17:49.412 "reset": true, 00:17:49.412 "nvme_admin": false, 00:17:49.412 "nvme_io": false, 00:17:49.412 "nvme_io_md": false, 00:17:49.412 "write_zeroes": true, 00:17:49.412 "zcopy": false, 00:17:49.412 "get_zone_info": false, 00:17:49.412 "zone_management": false, 00:17:49.412 "zone_append": false, 00:17:49.412 "compare": false, 00:17:49.412 "compare_and_write": false, 00:17:49.412 "abort": false, 00:17:49.412 "seek_hole": false, 00:17:49.412 "seek_data": false, 00:17:49.412 "copy": false, 00:17:49.412 "nvme_iov_md": false 00:17:49.412 }, 00:17:49.412 "memory_domains": [ 00:17:49.412 { 00:17:49.412 "dma_device_id": "system", 00:17:49.412 "dma_device_type": 1 00:17:49.412 }, 00:17:49.412 { 00:17:49.412 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:49.412 "dma_device_type": 2 00:17:49.412 }, 00:17:49.412 { 00:17:49.412 "dma_device_id": "system", 00:17:49.412 "dma_device_type": 1 00:17:49.412 }, 00:17:49.412 { 00:17:49.412 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:49.412 "dma_device_type": 2 00:17:49.412 } 00:17:49.412 ], 00:17:49.412 "driver_specific": { 00:17:49.412 "raid": { 00:17:49.412 "uuid": "21adc1c0-d614-41b0-aa05-008a469b96b1", 00:17:49.412 "strip_size_kb": 0, 00:17:49.412 "state": "online", 00:17:49.412 "raid_level": "raid1", 00:17:49.412 "superblock": true, 00:17:49.412 "num_base_bdevs": 2, 00:17:49.412 "num_base_bdevs_discovered": 2, 00:17:49.412 "num_base_bdevs_operational": 2, 00:17:49.412 "base_bdevs_list": [ 00:17:49.412 { 00:17:49.412 "name": "pt1", 
00:17:49.412 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:49.412 "is_configured": true, 00:17:49.412 "data_offset": 256, 00:17:49.412 "data_size": 7936 00:17:49.412 }, 00:17:49.412 { 00:17:49.412 "name": "pt2", 00:17:49.412 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:49.412 "is_configured": true, 00:17:49.412 "data_offset": 256, 00:17:49.412 "data_size": 7936 00:17:49.412 } 00:17:49.412 ] 00:17:49.412 } 00:17:49.412 } 00:17:49.412 }' 00:17:49.412 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:49.412 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:49.412 pt2' 00:17:49.412 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:49.412 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:49.412 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:49.412 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:49.412 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:49.412 17:34:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.412 17:34:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.412 17:34:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.412 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:49.412 17:34:22 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:49.412 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:49.412 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:49.412 17:34:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.412 17:34:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.672 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:49.672 17:34:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.672 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:49.672 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:49.672 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:49.672 17:34:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.672 17:34:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.672 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:49.672 [2024-12-07 17:34:22.849083] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:49.672 17:34:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.672 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # 
raid_bdev_uuid=21adc1c0-d614-41b0-aa05-008a469b96b1 00:17:49.672 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 21adc1c0-d614-41b0-aa05-008a469b96b1 ']' 00:17:49.672 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:49.672 17:34:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.672 17:34:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.672 [2024-12-07 17:34:22.896741] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:49.672 [2024-12-07 17:34:22.896763] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:49.672 [2024-12-07 17:34:22.896840] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:49.672 [2024-12-07 17:34:22.896897] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:49.672 [2024-12-07 17:34:22.896907] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:49.673 17:34:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.673 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:49.673 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.673 17:34:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.673 17:34:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.673 17:34:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.673 17:34:22 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:49.673 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:49.673 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:49.673 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:49.673 17:34:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.673 17:34:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.673 17:34:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.673 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:49.673 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:49.673 17:34:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.673 17:34:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.673 17:34:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.673 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:49.673 17:34:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.673 17:34:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.673 17:34:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:49.673 17:34:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.673 17:34:23 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:49.673 17:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:49.673 17:34:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:17:49.673 17:34:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:49.673 17:34:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:49.673 17:34:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:49.673 17:34:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:49.673 17:34:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:49.673 17:34:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:49.673 17:34:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.673 17:34:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.673 [2024-12-07 17:34:23.032553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:49.673 [2024-12-07 17:34:23.034457] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:49.673 [2024-12-07 17:34:23.034531] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:49.673 [2024-12-07 17:34:23.034587] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:49.673 [2024-12-07 17:34:23.034602] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:49.673 [2024-12-07 17:34:23.034612] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:49.673 request: 00:17:49.673 { 00:17:49.673 "name": "raid_bdev1", 00:17:49.673 "raid_level": "raid1", 00:17:49.673 "base_bdevs": [ 00:17:49.673 "malloc1", 00:17:49.673 "malloc2" 00:17:49.673 ], 00:17:49.673 "superblock": false, 00:17:49.673 "method": "bdev_raid_create", 00:17:49.673 "req_id": 1 00:17:49.673 } 00:17:49.673 Got JSON-RPC error response 00:17:49.673 response: 00:17:49.673 { 00:17:49.673 "code": -17, 00:17:49.673 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:49.673 } 00:17:49.673 17:34:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:49.673 17:34:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:17:49.673 17:34:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:49.673 17:34:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:49.673 17:34:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:49.673 17:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:49.673 17:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.673 17:34:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.673 17:34:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.933 17:34:23 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.933 17:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:49.933 17:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:49.933 17:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:49.933 17:34:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.933 17:34:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.933 [2024-12-07 17:34:23.088428] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:49.933 [2024-12-07 17:34:23.088538] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:49.933 [2024-12-07 17:34:23.088572] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:49.933 [2024-12-07 17:34:23.088627] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:49.933 [2024-12-07 17:34:23.090683] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:49.933 [2024-12-07 17:34:23.090771] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:49.933 [2024-12-07 17:34:23.090842] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:49.933 [2024-12-07 17:34:23.090918] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:49.933 pt1 00:17:49.933 17:34:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.933 17:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:49.933 17:34:23 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:49.933 17:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:49.933 17:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:49.933 17:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:49.933 17:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:49.933 17:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.933 17:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.933 17:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.933 17:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.933 17:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.933 17:34:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.933 17:34:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.933 17:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.933 17:34:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.933 17:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.933 "name": "raid_bdev1", 00:17:49.933 "uuid": "21adc1c0-d614-41b0-aa05-008a469b96b1", 00:17:49.933 "strip_size_kb": 0, 00:17:49.933 "state": "configuring", 00:17:49.933 "raid_level": "raid1", 00:17:49.933 
"superblock": true, 00:17:49.933 "num_base_bdevs": 2, 00:17:49.933 "num_base_bdevs_discovered": 1, 00:17:49.933 "num_base_bdevs_operational": 2, 00:17:49.933 "base_bdevs_list": [ 00:17:49.933 { 00:17:49.933 "name": "pt1", 00:17:49.933 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:49.933 "is_configured": true, 00:17:49.933 "data_offset": 256, 00:17:49.933 "data_size": 7936 00:17:49.933 }, 00:17:49.933 { 00:17:49.933 "name": null, 00:17:49.933 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:49.933 "is_configured": false, 00:17:49.933 "data_offset": 256, 00:17:49.933 "data_size": 7936 00:17:49.933 } 00:17:49.933 ] 00:17:49.933 }' 00:17:49.933 17:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.933 17:34:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.192 17:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:50.192 17:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:50.192 17:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:50.192 17:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:50.192 17:34:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.192 17:34:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.192 [2024-12-07 17:34:23.515684] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:50.192 [2024-12-07 17:34:23.515750] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:50.192 [2024-12-07 17:34:23.515772] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:50.192 
[2024-12-07 17:34:23.515782] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:50.193 [2024-12-07 17:34:23.515993] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:50.193 [2024-12-07 17:34:23.516011] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:50.193 [2024-12-07 17:34:23.516056] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:50.193 [2024-12-07 17:34:23.516076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:50.193 [2024-12-07 17:34:23.516183] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:50.193 [2024-12-07 17:34:23.516194] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:50.193 [2024-12-07 17:34:23.516267] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:50.193 [2024-12-07 17:34:23.516384] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:50.193 [2024-12-07 17:34:23.516400] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:50.193 [2024-12-07 17:34:23.516524] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:50.193 pt2 00:17:50.193 17:34:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.193 17:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:50.193 17:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:50.193 17:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:50.193 17:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:17:50.193 17:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:50.193 17:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:50.193 17:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:50.193 17:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:50.193 17:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.193 17:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.193 17:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.193 17:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.193 17:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.193 17:34:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.193 17:34:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.193 17:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.193 17:34:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.452 17:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.452 "name": "raid_bdev1", 00:17:50.452 "uuid": "21adc1c0-d614-41b0-aa05-008a469b96b1", 00:17:50.452 "strip_size_kb": 0, 00:17:50.452 "state": "online", 00:17:50.452 "raid_level": "raid1", 00:17:50.452 "superblock": true, 00:17:50.452 "num_base_bdevs": 2, 00:17:50.452 "num_base_bdevs_discovered": 2, 00:17:50.452 
"num_base_bdevs_operational": 2, 00:17:50.452 "base_bdevs_list": [ 00:17:50.452 { 00:17:50.452 "name": "pt1", 00:17:50.452 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:50.452 "is_configured": true, 00:17:50.452 "data_offset": 256, 00:17:50.452 "data_size": 7936 00:17:50.452 }, 00:17:50.452 { 00:17:50.452 "name": "pt2", 00:17:50.452 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:50.452 "is_configured": true, 00:17:50.452 "data_offset": 256, 00:17:50.452 "data_size": 7936 00:17:50.452 } 00:17:50.452 ] 00:17:50.452 }' 00:17:50.452 17:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.452 17:34:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.712 17:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:50.712 17:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:50.712 17:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:50.712 17:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:50.712 17:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:50.712 17:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:50.712 17:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:50.712 17:34:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.712 17:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:50.712 17:34:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.712 [2024-12-07 17:34:23.931352] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:50.712 17:34:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.712 17:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:50.712 "name": "raid_bdev1", 00:17:50.712 "aliases": [ 00:17:50.712 "21adc1c0-d614-41b0-aa05-008a469b96b1" 00:17:50.712 ], 00:17:50.712 "product_name": "Raid Volume", 00:17:50.712 "block_size": 4096, 00:17:50.712 "num_blocks": 7936, 00:17:50.712 "uuid": "21adc1c0-d614-41b0-aa05-008a469b96b1", 00:17:50.712 "md_size": 32, 00:17:50.712 "md_interleave": false, 00:17:50.712 "dif_type": 0, 00:17:50.712 "assigned_rate_limits": { 00:17:50.712 "rw_ios_per_sec": 0, 00:17:50.712 "rw_mbytes_per_sec": 0, 00:17:50.712 "r_mbytes_per_sec": 0, 00:17:50.712 "w_mbytes_per_sec": 0 00:17:50.712 }, 00:17:50.712 "claimed": false, 00:17:50.712 "zoned": false, 00:17:50.712 "supported_io_types": { 00:17:50.712 "read": true, 00:17:50.712 "write": true, 00:17:50.712 "unmap": false, 00:17:50.712 "flush": false, 00:17:50.712 "reset": true, 00:17:50.712 "nvme_admin": false, 00:17:50.712 "nvme_io": false, 00:17:50.712 "nvme_io_md": false, 00:17:50.712 "write_zeroes": true, 00:17:50.712 "zcopy": false, 00:17:50.712 "get_zone_info": false, 00:17:50.712 "zone_management": false, 00:17:50.712 "zone_append": false, 00:17:50.712 "compare": false, 00:17:50.712 "compare_and_write": false, 00:17:50.712 "abort": false, 00:17:50.712 "seek_hole": false, 00:17:50.712 "seek_data": false, 00:17:50.712 "copy": false, 00:17:50.712 "nvme_iov_md": false 00:17:50.712 }, 00:17:50.712 "memory_domains": [ 00:17:50.712 { 00:17:50.712 "dma_device_id": "system", 00:17:50.712 "dma_device_type": 1 00:17:50.712 }, 00:17:50.712 { 00:17:50.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:50.712 "dma_device_type": 2 00:17:50.712 }, 00:17:50.712 { 00:17:50.712 "dma_device_id": "system", 00:17:50.712 "dma_device_type": 
1 00:17:50.712 }, 00:17:50.712 { 00:17:50.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:50.712 "dma_device_type": 2 00:17:50.712 } 00:17:50.712 ], 00:17:50.712 "driver_specific": { 00:17:50.712 "raid": { 00:17:50.712 "uuid": "21adc1c0-d614-41b0-aa05-008a469b96b1", 00:17:50.712 "strip_size_kb": 0, 00:17:50.712 "state": "online", 00:17:50.712 "raid_level": "raid1", 00:17:50.712 "superblock": true, 00:17:50.712 "num_base_bdevs": 2, 00:17:50.712 "num_base_bdevs_discovered": 2, 00:17:50.712 "num_base_bdevs_operational": 2, 00:17:50.712 "base_bdevs_list": [ 00:17:50.712 { 00:17:50.712 "name": "pt1", 00:17:50.712 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:50.712 "is_configured": true, 00:17:50.712 "data_offset": 256, 00:17:50.712 "data_size": 7936 00:17:50.712 }, 00:17:50.712 { 00:17:50.712 "name": "pt2", 00:17:50.712 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:50.712 "is_configured": true, 00:17:50.712 "data_offset": 256, 00:17:50.712 "data_size": 7936 00:17:50.712 } 00:17:50.712 ] 00:17:50.712 } 00:17:50.712 } 00:17:50.712 }' 00:17:50.712 17:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:50.712 17:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:50.712 pt2' 00:17:50.712 17:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:50.712 17:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:50.712 17:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:50.712 17:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:50.712 17:34:24 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:50.712 17:34:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.712 17:34:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.972 17:34:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.972 17:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:50.972 17:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:50.972 17:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:50.972 17:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:50.972 17:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:50.972 17:34:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.972 17:34:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.972 17:34:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.972 17:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:50.972 17:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:50.972 17:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:50.972 17:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs 
-b raid_bdev1 00:17:50.972 17:34:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.972 17:34:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.972 [2024-12-07 17:34:24.182908] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:50.972 17:34:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.972 17:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 21adc1c0-d614-41b0-aa05-008a469b96b1 '!=' 21adc1c0-d614-41b0-aa05-008a469b96b1 ']' 00:17:50.972 17:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:50.972 17:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:50.972 17:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:17:50.972 17:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:50.972 17:34:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.972 17:34:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.972 [2024-12-07 17:34:24.250591] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:50.972 17:34:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.972 17:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:50.972 17:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:50.972 17:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:50.972 17:34:24 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:50.972 17:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:50.972 17:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:50.972 17:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.973 17:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.973 17:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.973 17:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.973 17:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.973 17:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.973 17:34:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.973 17:34:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.973 17:34:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.973 17:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.973 "name": "raid_bdev1", 00:17:50.973 "uuid": "21adc1c0-d614-41b0-aa05-008a469b96b1", 00:17:50.973 "strip_size_kb": 0, 00:17:50.973 "state": "online", 00:17:50.973 "raid_level": "raid1", 00:17:50.973 "superblock": true, 00:17:50.973 "num_base_bdevs": 2, 00:17:50.973 "num_base_bdevs_discovered": 1, 00:17:50.973 "num_base_bdevs_operational": 1, 00:17:50.973 "base_bdevs_list": [ 00:17:50.973 { 00:17:50.973 "name": null, 00:17:50.973 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:50.973 "is_configured": false, 00:17:50.973 "data_offset": 0, 00:17:50.973 "data_size": 7936 00:17:50.973 }, 00:17:50.973 { 00:17:50.973 "name": "pt2", 00:17:50.973 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:50.973 "is_configured": true, 00:17:50.973 "data_offset": 256, 00:17:50.973 "data_size": 7936 00:17:50.973 } 00:17:50.973 ] 00:17:50.973 }' 00:17:50.973 17:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.973 17:34:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.542 17:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:51.542 17:34:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.542 17:34:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.542 [2024-12-07 17:34:24.725788] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:51.542 [2024-12-07 17:34:24.725868] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:51.542 [2024-12-07 17:34:24.725976] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:51.542 [2024-12-07 17:34:24.726045] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:51.542 [2024-12-07 17:34:24.726103] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:51.542 17:34:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.542 17:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:51.542 17:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:51.542 17:34:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.542 17:34:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.542 17:34:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.542 17:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:51.542 17:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:51.542 17:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:51.542 17:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:51.542 17:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:51.542 17:34:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.542 17:34:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.542 17:34:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.542 17:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:51.542 17:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:51.542 17:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:51.542 17:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:51.542 17:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:17:51.542 17:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:17:51.542 17:34:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.542 17:34:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.542 [2024-12-07 17:34:24.793657] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:51.542 [2024-12-07 17:34:24.793744] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:51.542 [2024-12-07 17:34:24.793780] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:51.542 [2024-12-07 17:34:24.793791] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:51.542 [2024-12-07 17:34:24.795806] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:51.542 [2024-12-07 17:34:24.795891] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:51.542 [2024-12-07 17:34:24.795988] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:51.542 [2024-12-07 17:34:24.796060] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:51.542 [2024-12-07 17:34:24.796201] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:51.542 [2024-12-07 17:34:24.796245] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:51.542 [2024-12-07 17:34:24.796339] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:51.542 [2024-12-07 17:34:24.796492] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:51.542 [2024-12-07 17:34:24.796503] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:51.542 [2024-12-07 17:34:24.796604] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:17:51.542 pt2 00:17:51.542 17:34:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.542 17:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:51.542 17:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:51.542 17:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:51.542 17:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:51.542 17:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:51.542 17:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:51.542 17:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.542 17:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.542 17:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.542 17:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.542 17:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.542 17:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.542 17:34:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.542 17:34:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.542 17:34:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.542 
17:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.542 "name": "raid_bdev1", 00:17:51.542 "uuid": "21adc1c0-d614-41b0-aa05-008a469b96b1", 00:17:51.542 "strip_size_kb": 0, 00:17:51.542 "state": "online", 00:17:51.542 "raid_level": "raid1", 00:17:51.542 "superblock": true, 00:17:51.542 "num_base_bdevs": 2, 00:17:51.542 "num_base_bdevs_discovered": 1, 00:17:51.542 "num_base_bdevs_operational": 1, 00:17:51.542 "base_bdevs_list": [ 00:17:51.542 { 00:17:51.542 "name": null, 00:17:51.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.542 "is_configured": false, 00:17:51.542 "data_offset": 256, 00:17:51.542 "data_size": 7936 00:17:51.542 }, 00:17:51.542 { 00:17:51.542 "name": "pt2", 00:17:51.542 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:51.542 "is_configured": true, 00:17:51.542 "data_offset": 256, 00:17:51.542 "data_size": 7936 00:17:51.542 } 00:17:51.542 ] 00:17:51.542 }' 00:17:51.543 17:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.543 17:34:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:52.112 17:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:52.112 17:34:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.112 17:34:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:52.112 [2024-12-07 17:34:25.216984] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:52.112 [2024-12-07 17:34:25.217062] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:52.112 [2024-12-07 17:34:25.217172] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:52.112 [2024-12-07 17:34:25.217240] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:52.112 [2024-12-07 17:34:25.217293] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:52.112 17:34:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.112 17:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.112 17:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:52.112 17:34:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.112 17:34:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:52.112 17:34:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.112 17:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:52.112 17:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:52.112 17:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:52.112 17:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:52.112 17:34:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.112 17:34:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:52.112 [2024-12-07 17:34:25.280875] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:52.112 [2024-12-07 17:34:25.280944] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:52.112 [2024-12-07 17:34:25.280966] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000009980 00:17:52.112 [2024-12-07 17:34:25.280975] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:52.112 [2024-12-07 17:34:25.282886] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:52.112 [2024-12-07 17:34:25.282921] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:52.112 [2024-12-07 17:34:25.282981] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:52.112 [2024-12-07 17:34:25.283025] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:52.112 [2024-12-07 17:34:25.283142] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:52.112 [2024-12-07 17:34:25.283152] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:52.112 [2024-12-07 17:34:25.283169] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:52.112 [2024-12-07 17:34:25.283239] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:52.112 [2024-12-07 17:34:25.283319] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:52.112 [2024-12-07 17:34:25.283327] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:52.112 [2024-12-07 17:34:25.283388] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:52.112 [2024-12-07 17:34:25.283490] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:52.112 [2024-12-07 17:34:25.283517] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:52.112 [2024-12-07 17:34:25.283629] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:52.112 pt1 00:17:52.112 
17:34:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.112 17:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:17:52.112 17:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:52.112 17:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:52.112 17:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:52.112 17:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:52.112 17:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:52.112 17:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:52.112 17:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.112 17:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.112 17:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.112 17:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.112 17:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.112 17:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.112 17:34:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.112 17:34:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:52.112 17:34:25 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.112 17:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.112 "name": "raid_bdev1", 00:17:52.112 "uuid": "21adc1c0-d614-41b0-aa05-008a469b96b1", 00:17:52.112 "strip_size_kb": 0, 00:17:52.112 "state": "online", 00:17:52.112 "raid_level": "raid1", 00:17:52.112 "superblock": true, 00:17:52.112 "num_base_bdevs": 2, 00:17:52.112 "num_base_bdevs_discovered": 1, 00:17:52.112 "num_base_bdevs_operational": 1, 00:17:52.112 "base_bdevs_list": [ 00:17:52.112 { 00:17:52.112 "name": null, 00:17:52.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.112 "is_configured": false, 00:17:52.112 "data_offset": 256, 00:17:52.112 "data_size": 7936 00:17:52.112 }, 00:17:52.112 { 00:17:52.112 "name": "pt2", 00:17:52.112 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:52.112 "is_configured": true, 00:17:52.112 "data_offset": 256, 00:17:52.112 "data_size": 7936 00:17:52.112 } 00:17:52.112 ] 00:17:52.112 }' 00:17:52.112 17:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.112 17:34:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:52.371 17:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:52.371 17:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:52.371 17:34:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.371 17:34:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:52.371 17:34:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.372 17:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:52.372 17:34:25 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:52.372 17:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:52.372 17:34:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.372 17:34:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:52.372 [2024-12-07 17:34:25.732302] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:52.372 17:34:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.631 17:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 21adc1c0-d614-41b0-aa05-008a469b96b1 '!=' 21adc1c0-d614-41b0-aa05-008a469b96b1 ']' 00:17:52.631 17:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87426 00:17:52.631 17:34:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87426 ']' 00:17:52.631 17:34:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 87426 00:17:52.632 17:34:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:17:52.632 17:34:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:52.632 17:34:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87426 00:17:52.632 17:34:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:52.632 17:34:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:52.632 17:34:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87426' 00:17:52.632 
killing process with pid 87426 00:17:52.632 17:34:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 87426 00:17:52.632 [2024-12-07 17:34:25.802069] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:52.632 [2024-12-07 17:34:25.802196] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:52.632 [2024-12-07 17:34:25.802271] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:52.632 17:34:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 87426 00:17:52.632 [2024-12-07 17:34:25.802328] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:52.632 [2024-12-07 17:34:26.008691] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:54.012 17:34:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:17:54.012 00:17:54.012 real 0m5.868s 00:17:54.012 user 0m8.864s 00:17:54.012 sys 0m1.047s 00:17:54.012 17:34:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:54.012 ************************************ 00:17:54.012 END TEST raid_superblock_test_md_separate 00:17:54.012 ************************************ 00:17:54.012 17:34:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.012 17:34:27 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:17:54.012 17:34:27 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:17:54.012 17:34:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:54.012 17:34:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:54.012 17:34:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:54.012 ************************************ 
00:17:54.012 START TEST raid_rebuild_test_sb_md_separate 00:17:54.012 ************************************ 00:17:54.012 17:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:17:54.012 17:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:54.012 17:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:54.012 17:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:54.012 17:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:54.012 17:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:54.012 17:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:54.012 17:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:54.012 17:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:54.012 17:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:54.012 17:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:54.012 17:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:54.012 17:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:54.012 17:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:54.012 17:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:54.012 17:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:54.012 
17:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:54.012 17:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:54.012 17:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:54.012 17:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:54.012 17:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:54.012 17:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:54.012 17:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:54.012 17:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:54.012 17:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:54.012 17:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=87750 00:17:54.012 17:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 87750 00:17:54.012 17:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:54.012 17:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87750 ']' 00:17:54.012 17:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:54.012 17:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:54.012 17:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:17:54.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:54.012 17:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:54.012 17:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.012 [2024-12-07 17:34:27.253968] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:17:54.012 [2024-12-07 17:34:27.254180] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87750 ] 00:17:54.012 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:54.012 Zero copy mechanism will not be used. 00:17:54.012 [2024-12-07 17:34:27.424358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.272 [2024-12-07 17:34:27.535368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.532 [2024-12-07 17:34:27.731273] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:54.532 [2024-12-07 17:34:27.731387] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:54.792 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:54.792 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:17:54.792 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:54.792 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:17:54.792 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:54.792 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.792 BaseBdev1_malloc 00:17:54.792 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.792 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:54.792 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.792 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.792 [2024-12-07 17:34:28.126031] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:54.792 [2024-12-07 17:34:28.126090] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:54.792 [2024-12-07 17:34:28.126128] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:54.792 [2024-12-07 17:34:28.126139] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:54.792 [2024-12-07 17:34:28.127982] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:54.792 [2024-12-07 17:34:28.128021] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:54.792 BaseBdev1 00:17:54.792 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.792 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:54.792 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:17:54.792 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.792 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:17:54.792 BaseBdev2_malloc 00:17:54.792 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.792 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:54.792 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.792 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.058 [2024-12-07 17:34:28.176491] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:55.058 [2024-12-07 17:34:28.176552] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:55.058 [2024-12-07 17:34:28.176572] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:55.058 [2024-12-07 17:34:28.176585] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:55.058 [2024-12-07 17:34:28.178433] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:55.058 [2024-12-07 17:34:28.178475] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:55.058 BaseBdev2 00:17:55.058 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.058 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:17:55.058 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.058 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.058 spare_malloc 00:17:55.058 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.058 17:34:28 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:55.058 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.058 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.058 spare_delay 00:17:55.058 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.058 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:55.058 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.058 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.058 [2024-12-07 17:34:28.251187] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:55.058 [2024-12-07 17:34:28.251308] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:55.059 [2024-12-07 17:34:28.251332] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:55.059 [2024-12-07 17:34:28.251343] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:55.059 [2024-12-07 17:34:28.253284] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:55.059 [2024-12-07 17:34:28.253325] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:55.059 spare 00:17:55.059 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.059 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:55.059 17:34:28 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.059 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.059 [2024-12-07 17:34:28.263206] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:55.059 [2024-12-07 17:34:28.264912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:55.059 [2024-12-07 17:34:28.265097] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:55.059 [2024-12-07 17:34:28.265113] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:55.059 [2024-12-07 17:34:28.265183] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:55.059 [2024-12-07 17:34:28.265308] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:55.059 [2024-12-07 17:34:28.265326] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:55.059 [2024-12-07 17:34:28.265423] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:55.059 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.059 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:55.059 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:55.059 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:55.059 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:55.059 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:55.059 17:34:28 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:55.059 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:55.059 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:55.059 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:55.059 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:55.059 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.059 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.059 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.059 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.059 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.059 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:55.059 "name": "raid_bdev1", 00:17:55.059 "uuid": "44318acc-cf36-49e9-96e9-e86012b13412", 00:17:55.059 "strip_size_kb": 0, 00:17:55.059 "state": "online", 00:17:55.059 "raid_level": "raid1", 00:17:55.059 "superblock": true, 00:17:55.059 "num_base_bdevs": 2, 00:17:55.059 "num_base_bdevs_discovered": 2, 00:17:55.059 "num_base_bdevs_operational": 2, 00:17:55.059 "base_bdevs_list": [ 00:17:55.059 { 00:17:55.059 "name": "BaseBdev1", 00:17:55.059 "uuid": "aa1dd53b-8c1c-5036-9492-fc4db480d5f6", 00:17:55.059 "is_configured": true, 00:17:55.059 "data_offset": 256, 00:17:55.059 "data_size": 7936 00:17:55.059 }, 00:17:55.059 { 00:17:55.059 "name": "BaseBdev2", 00:17:55.059 "uuid": 
"a6a01921-b284-54ee-8da2-d162c851c4f8", 00:17:55.059 "is_configured": true, 00:17:55.059 "data_offset": 256, 00:17:55.059 "data_size": 7936 00:17:55.059 } 00:17:55.059 ] 00:17:55.059 }' 00:17:55.059 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:55.059 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.372 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:55.372 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:55.372 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.372 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.372 [2024-12-07 17:34:28.710740] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:55.642 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.642 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:17:55.642 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.642 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:55.642 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.642 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.642 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.642 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:17:55.642 17:34:28 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:55.642 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:55.642 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:55.642 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:55.642 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:55.642 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:55.642 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:55.642 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:55.642 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:55.642 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:17:55.642 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:55.642 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:55.642 17:34:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:55.642 [2024-12-07 17:34:28.982051] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:55.642 /dev/nbd0 00:17:55.903 17:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:55.903 17:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:55.903 17:34:29 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:55.903 17:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:17:55.903 17:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:55.903 17:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:55.903 17:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:55.903 17:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:17:55.903 17:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:55.903 17:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:55.903 17:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:55.903 1+0 records in 00:17:55.903 1+0 records out 00:17:55.903 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000502323 s, 8.2 MB/s 00:17:55.903 17:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:55.903 17:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:17:55.903 17:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:55.903 17:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:55.903 17:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:17:55.903 17:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:55.903 17:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:55.904 17:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:55.904 17:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:55.904 17:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:17:56.474 7936+0 records in 00:17:56.474 7936+0 records out 00:17:56.474 32505856 bytes (33 MB, 31 MiB) copied, 0.54524 s, 59.6 MB/s 00:17:56.474 17:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:56.474 17:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:56.474 17:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:56.475 17:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:56.475 17:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:17:56.475 17:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:56.475 17:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:56.475 17:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:56.475 [2024-12-07 17:34:29.824373] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:56.475 17:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:56.475 17:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:56.475 17:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:56.475 17:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:56.475 17:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:56.475 17:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:17:56.475 17:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:17:56.475 17:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:56.475 17:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.475 17:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.475 [2024-12-07 17:34:29.838483] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:56.475 17:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.475 17:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:56.475 17:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:56.475 17:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:56.475 17:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:56.475 17:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:56.475 17:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:56.475 17:34:29 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.475 17:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.475 17:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.475 17:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.475 17:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.475 17:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.475 17:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.475 17:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.734 17:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.734 17:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.734 "name": "raid_bdev1", 00:17:56.734 "uuid": "44318acc-cf36-49e9-96e9-e86012b13412", 00:17:56.734 "strip_size_kb": 0, 00:17:56.735 "state": "online", 00:17:56.735 "raid_level": "raid1", 00:17:56.735 "superblock": true, 00:17:56.735 "num_base_bdevs": 2, 00:17:56.735 "num_base_bdevs_discovered": 1, 00:17:56.735 "num_base_bdevs_operational": 1, 00:17:56.735 "base_bdevs_list": [ 00:17:56.735 { 00:17:56.735 "name": null, 00:17:56.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.735 "is_configured": false, 00:17:56.735 "data_offset": 0, 00:17:56.735 "data_size": 7936 00:17:56.735 }, 00:17:56.735 { 00:17:56.735 "name": "BaseBdev2", 00:17:56.735 "uuid": "a6a01921-b284-54ee-8da2-d162c851c4f8", 00:17:56.735 "is_configured": true, 00:17:56.735 "data_offset": 256, 00:17:56.735 "data_size": 7936 00:17:56.735 } 
00:17:56.735 ] 00:17:56.735 }' 00:17:56.735 17:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.735 17:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.995 17:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:56.995 17:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.995 17:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.995 [2024-12-07 17:34:30.265797] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:56.995 [2024-12-07 17:34:30.279914] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:17:56.995 17:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.995 17:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:56.995 [2024-12-07 17:34:30.281757] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:57.935 17:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:57.935 17:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:57.935 17:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:57.935 17:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:57.935 17:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:57.935 17:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.935 17:34:31 
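[Editorial note] The `verify_raid_bdev_state`/`verify_raid_bdev_process` helpers seen throughout this run extract fields from `bdev_raid_get_bdevs` output with `jq` (bdev_raid.sh@113, @176-177). A minimal sketch of the same extraction, run against a canned JSON snippet shaped like the `raid_bdev_info` blobs in this log rather than a live SPDK RPC socket (requires `jq`):

```shell
# Field extraction in the style of bdev_raid.sh@113 / @176-177 above.
# 'json' is a canned stand-in for 'rpc.py bdev_raid_get_bdevs all'.
json='[{"name":"raid_bdev1","state":"online","raid_level":"raid1",
        "num_base_bdevs_discovered":1,"num_base_bdevs_operational":1}]'
info=$(jq -r '.[] | select(.name == "raid_bdev1")' <<<"$json")
state=$(jq -r '.state' <<<"$info")
# The '// "none"' fallback returns "none" when no rebuild process
# object is present, which is how the idle-state checks pass.
ptype=$(jq -r '.process.type // "none"' <<<"$info")
echo "$state $ptype"
```

The `// "none"` alternative operator is what lets the same check cover both a raid bdev mid-rebuild (`"type": "rebuild"`) and one with no `process` object at all.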
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.935 17:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.935 17:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.935 17:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.196 17:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:58.196 "name": "raid_bdev1", 00:17:58.196 "uuid": "44318acc-cf36-49e9-96e9-e86012b13412", 00:17:58.196 "strip_size_kb": 0, 00:17:58.196 "state": "online", 00:17:58.196 "raid_level": "raid1", 00:17:58.196 "superblock": true, 00:17:58.196 "num_base_bdevs": 2, 00:17:58.196 "num_base_bdevs_discovered": 2, 00:17:58.196 "num_base_bdevs_operational": 2, 00:17:58.196 "process": { 00:17:58.196 "type": "rebuild", 00:17:58.196 "target": "spare", 00:17:58.196 "progress": { 00:17:58.196 "blocks": 2560, 00:17:58.196 "percent": 32 00:17:58.196 } 00:17:58.196 }, 00:17:58.196 "base_bdevs_list": [ 00:17:58.196 { 00:17:58.196 "name": "spare", 00:17:58.196 "uuid": "147da9c1-c2d5-5d25-bd19-3a59c7233ca3", 00:17:58.196 "is_configured": true, 00:17:58.196 "data_offset": 256, 00:17:58.196 "data_size": 7936 00:17:58.196 }, 00:17:58.196 { 00:17:58.196 "name": "BaseBdev2", 00:17:58.196 "uuid": "a6a01921-b284-54ee-8da2-d162c851c4f8", 00:17:58.196 "is_configured": true, 00:17:58.196 "data_offset": 256, 00:17:58.196 "data_size": 7936 00:17:58.196 } 00:17:58.196 ] 00:17:58.196 }' 00:17:58.196 17:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:58.196 17:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:58.196 17:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:17:58.196 17:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:58.196 17:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:58.196 17:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.196 17:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:58.196 [2024-12-07 17:34:31.441767] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:58.196 [2024-12-07 17:34:31.486789] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:58.196 [2024-12-07 17:34:31.486893] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:58.196 [2024-12-07 17:34:31.486957] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:58.196 [2024-12-07 17:34:31.486985] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:58.196 17:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.196 17:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:58.196 17:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:58.196 17:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:58.196 17:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:58.196 17:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:58.196 17:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:17:58.196 17:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:58.196 17:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:58.196 17:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:58.196 17:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:58.196 17:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.196 17:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.196 17:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.196 17:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:58.196 17:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.196 17:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:58.196 "name": "raid_bdev1", 00:17:58.196 "uuid": "44318acc-cf36-49e9-96e9-e86012b13412", 00:17:58.196 "strip_size_kb": 0, 00:17:58.196 "state": "online", 00:17:58.196 "raid_level": "raid1", 00:17:58.196 "superblock": true, 00:17:58.196 "num_base_bdevs": 2, 00:17:58.196 "num_base_bdevs_discovered": 1, 00:17:58.196 "num_base_bdevs_operational": 1, 00:17:58.196 "base_bdevs_list": [ 00:17:58.196 { 00:17:58.196 "name": null, 00:17:58.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.196 "is_configured": false, 00:17:58.196 "data_offset": 0, 00:17:58.196 "data_size": 7936 00:17:58.196 }, 00:17:58.196 { 00:17:58.196 "name": "BaseBdev2", 00:17:58.196 "uuid": "a6a01921-b284-54ee-8da2-d162c851c4f8", 00:17:58.196 "is_configured": true, 00:17:58.196 "data_offset": 
256, 00:17:58.196 "data_size": 7936 00:17:58.196 } 00:17:58.196 ] 00:17:58.196 }' 00:17:58.196 17:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:58.196 17:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:58.767 17:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:58.767 17:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:58.767 17:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:58.767 17:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:58.767 17:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:58.767 17:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.767 17:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.767 17:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.767 17:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:58.767 17:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.767 17:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:58.767 "name": "raid_bdev1", 00:17:58.767 "uuid": "44318acc-cf36-49e9-96e9-e86012b13412", 00:17:58.767 "strip_size_kb": 0, 00:17:58.767 "state": "online", 00:17:58.767 "raid_level": "raid1", 00:17:58.767 "superblock": true, 00:17:58.767 "num_base_bdevs": 2, 00:17:58.767 "num_base_bdevs_discovered": 1, 00:17:58.767 "num_base_bdevs_operational": 1, 
00:17:58.767 "base_bdevs_list": [ 00:17:58.767 { 00:17:58.767 "name": null, 00:17:58.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.767 "is_configured": false, 00:17:58.767 "data_offset": 0, 00:17:58.767 "data_size": 7936 00:17:58.767 }, 00:17:58.767 { 00:17:58.767 "name": "BaseBdev2", 00:17:58.767 "uuid": "a6a01921-b284-54ee-8da2-d162c851c4f8", 00:17:58.767 "is_configured": true, 00:17:58.767 "data_offset": 256, 00:17:58.767 "data_size": 7936 00:17:58.767 } 00:17:58.767 ] 00:17:58.767 }' 00:17:58.767 17:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:58.767 17:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:58.767 17:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:58.767 17:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:58.767 17:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:58.767 17:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.767 17:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:58.767 [2024-12-07 17:34:32.085082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:58.767 [2024-12-07 17:34:32.098953] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:17:58.767 17:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.767 17:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:58.767 [2024-12-07 17:34:32.100797] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:00.152 17:34:33 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:00.152 17:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:00.152 17:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:00.152 17:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:00.152 17:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:00.152 17:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.152 17:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.152 17:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.152 17:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.152 17:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.152 17:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:00.152 "name": "raid_bdev1", 00:18:00.152 "uuid": "44318acc-cf36-49e9-96e9-e86012b13412", 00:18:00.152 "strip_size_kb": 0, 00:18:00.152 "state": "online", 00:18:00.152 "raid_level": "raid1", 00:18:00.152 "superblock": true, 00:18:00.152 "num_base_bdevs": 2, 00:18:00.152 "num_base_bdevs_discovered": 2, 00:18:00.152 "num_base_bdevs_operational": 2, 00:18:00.152 "process": { 00:18:00.152 "type": "rebuild", 00:18:00.152 "target": "spare", 00:18:00.152 "progress": { 00:18:00.152 "blocks": 2560, 00:18:00.152 "percent": 32 00:18:00.152 } 00:18:00.152 }, 00:18:00.152 "base_bdevs_list": [ 00:18:00.152 { 00:18:00.152 "name": "spare", 00:18:00.152 "uuid": 
"147da9c1-c2d5-5d25-bd19-3a59c7233ca3", 00:18:00.152 "is_configured": true, 00:18:00.152 "data_offset": 256, 00:18:00.152 "data_size": 7936 00:18:00.152 }, 00:18:00.152 { 00:18:00.152 "name": "BaseBdev2", 00:18:00.152 "uuid": "a6a01921-b284-54ee-8da2-d162c851c4f8", 00:18:00.152 "is_configured": true, 00:18:00.152 "data_offset": 256, 00:18:00.152 "data_size": 7936 00:18:00.152 } 00:18:00.152 ] 00:18:00.152 }' 00:18:00.152 17:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:00.152 17:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:00.152 17:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:00.152 17:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:00.152 17:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:00.152 17:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:00.152 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:00.152 17:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:00.152 17:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:00.152 17:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:00.152 17:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=704 00:18:00.152 17:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:00.152 17:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:00.152 
17:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:00.152 17:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:00.152 17:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:00.152 17:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:00.152 17:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.152 17:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.152 17:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.152 17:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.152 17:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.152 17:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:00.152 "name": "raid_bdev1", 00:18:00.152 "uuid": "44318acc-cf36-49e9-96e9-e86012b13412", 00:18:00.152 "strip_size_kb": 0, 00:18:00.152 "state": "online", 00:18:00.152 "raid_level": "raid1", 00:18:00.152 "superblock": true, 00:18:00.152 "num_base_bdevs": 2, 00:18:00.152 "num_base_bdevs_discovered": 2, 00:18:00.152 "num_base_bdevs_operational": 2, 00:18:00.152 "process": { 00:18:00.152 "type": "rebuild", 00:18:00.152 "target": "spare", 00:18:00.152 "progress": { 00:18:00.152 "blocks": 2816, 00:18:00.152 "percent": 35 00:18:00.152 } 00:18:00.152 }, 00:18:00.152 "base_bdevs_list": [ 00:18:00.152 { 00:18:00.152 "name": "spare", 00:18:00.152 "uuid": "147da9c1-c2d5-5d25-bd19-3a59c7233ca3", 00:18:00.152 "is_configured": true, 00:18:00.152 "data_offset": 256, 00:18:00.152 "data_size": 7936 00:18:00.152 
}, 00:18:00.152 { 00:18:00.152 "name": "BaseBdev2", 00:18:00.152 "uuid": "a6a01921-b284-54ee-8da2-d162c851c4f8", 00:18:00.153 "is_configured": true, 00:18:00.153 "data_offset": 256, 00:18:00.153 "data_size": 7936 00:18:00.153 } 00:18:00.153 ] 00:18:00.153 }' 00:18:00.153 17:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:00.153 17:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:00.153 17:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:00.153 17:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:00.153 17:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:01.092 17:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:01.092 17:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:01.092 17:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:01.092 17:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:01.092 17:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:01.092 17:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:01.092 17:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.092 17:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.092 17:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:18:01.092 17:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.092 17:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.092 17:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:01.092 "name": "raid_bdev1", 00:18:01.093 "uuid": "44318acc-cf36-49e9-96e9-e86012b13412", 00:18:01.093 "strip_size_kb": 0, 00:18:01.093 "state": "online", 00:18:01.093 "raid_level": "raid1", 00:18:01.093 "superblock": true, 00:18:01.093 "num_base_bdevs": 2, 00:18:01.093 "num_base_bdevs_discovered": 2, 00:18:01.093 "num_base_bdevs_operational": 2, 00:18:01.093 "process": { 00:18:01.093 "type": "rebuild", 00:18:01.093 "target": "spare", 00:18:01.093 "progress": { 00:18:01.093 "blocks": 5632, 00:18:01.093 "percent": 70 00:18:01.093 } 00:18:01.093 }, 00:18:01.093 "base_bdevs_list": [ 00:18:01.093 { 00:18:01.093 "name": "spare", 00:18:01.093 "uuid": "147da9c1-c2d5-5d25-bd19-3a59c7233ca3", 00:18:01.093 "is_configured": true, 00:18:01.093 "data_offset": 256, 00:18:01.093 "data_size": 7936 00:18:01.093 }, 00:18:01.093 { 00:18:01.093 "name": "BaseBdev2", 00:18:01.093 "uuid": "a6a01921-b284-54ee-8da2-d162c851c4f8", 00:18:01.093 "is_configured": true, 00:18:01.093 "data_offset": 256, 00:18:01.093 "data_size": 7936 00:18:01.093 } 00:18:01.093 ] 00:18:01.093 }' 00:18:01.093 17:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:01.093 17:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:01.093 17:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:01.352 17:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:01.352 17:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:18:01.919 [2024-12-07 17:34:35.213403] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:01.919 [2024-12-07 17:34:35.213558] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:01.919 [2024-12-07 17:34:35.213693] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:02.177 17:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:02.177 17:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:02.177 17:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:02.177 17:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:02.177 17:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:02.177 17:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:02.177 17:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.177 17:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.177 17:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.177 17:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:02.177 17:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.436 17:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:02.436 "name": "raid_bdev1", 00:18:02.436 "uuid": "44318acc-cf36-49e9-96e9-e86012b13412", 00:18:02.436 
"strip_size_kb": 0, 00:18:02.436 "state": "online", 00:18:02.436 "raid_level": "raid1", 00:18:02.436 "superblock": true, 00:18:02.436 "num_base_bdevs": 2, 00:18:02.436 "num_base_bdevs_discovered": 2, 00:18:02.436 "num_base_bdevs_operational": 2, 00:18:02.436 "base_bdevs_list": [ 00:18:02.436 { 00:18:02.436 "name": "spare", 00:18:02.436 "uuid": "147da9c1-c2d5-5d25-bd19-3a59c7233ca3", 00:18:02.436 "is_configured": true, 00:18:02.436 "data_offset": 256, 00:18:02.436 "data_size": 7936 00:18:02.436 }, 00:18:02.436 { 00:18:02.436 "name": "BaseBdev2", 00:18:02.436 "uuid": "a6a01921-b284-54ee-8da2-d162c851c4f8", 00:18:02.436 "is_configured": true, 00:18:02.436 "data_offset": 256, 00:18:02.436 "data_size": 7936 00:18:02.436 } 00:18:02.436 ] 00:18:02.436 }' 00:18:02.436 17:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:02.436 17:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:02.436 17:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:02.436 17:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:02.436 17:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:18:02.436 17:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:02.436 17:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:02.436 17:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:02.436 17:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:02.436 17:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:02.436 17:34:35 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.436 17:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.436 17:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.436 17:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:02.436 17:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.436 17:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:02.436 "name": "raid_bdev1", 00:18:02.436 "uuid": "44318acc-cf36-49e9-96e9-e86012b13412", 00:18:02.436 "strip_size_kb": 0, 00:18:02.436 "state": "online", 00:18:02.436 "raid_level": "raid1", 00:18:02.436 "superblock": true, 00:18:02.436 "num_base_bdevs": 2, 00:18:02.436 "num_base_bdevs_discovered": 2, 00:18:02.436 "num_base_bdevs_operational": 2, 00:18:02.436 "base_bdevs_list": [ 00:18:02.436 { 00:18:02.436 "name": "spare", 00:18:02.436 "uuid": "147da9c1-c2d5-5d25-bd19-3a59c7233ca3", 00:18:02.436 "is_configured": true, 00:18:02.436 "data_offset": 256, 00:18:02.436 "data_size": 7936 00:18:02.436 }, 00:18:02.436 { 00:18:02.436 "name": "BaseBdev2", 00:18:02.436 "uuid": "a6a01921-b284-54ee-8da2-d162c851c4f8", 00:18:02.436 "is_configured": true, 00:18:02.436 "data_offset": 256, 00:18:02.436 "data_size": 7936 00:18:02.436 } 00:18:02.436 ] 00:18:02.436 }' 00:18:02.436 17:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:02.436 17:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:02.436 17:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:02.436 17:34:35 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:02.436 17:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:02.436 17:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:02.436 17:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:02.696 17:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:02.696 17:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:02.696 17:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:02.696 17:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.696 17:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.696 17:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.696 17:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.696 17:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.696 17:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.696 17:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.696 17:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:02.696 17:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.696 17:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.696 "name": "raid_bdev1", 00:18:02.696 "uuid": "44318acc-cf36-49e9-96e9-e86012b13412", 00:18:02.696 "strip_size_kb": 0, 00:18:02.696 "state": "online", 00:18:02.696 "raid_level": "raid1", 00:18:02.696 "superblock": true, 00:18:02.696 "num_base_bdevs": 2, 00:18:02.696 "num_base_bdevs_discovered": 2, 00:18:02.696 "num_base_bdevs_operational": 2, 00:18:02.696 "base_bdevs_list": [ 00:18:02.696 { 00:18:02.696 "name": "spare", 00:18:02.696 "uuid": "147da9c1-c2d5-5d25-bd19-3a59c7233ca3", 00:18:02.696 "is_configured": true, 00:18:02.696 "data_offset": 256, 00:18:02.696 "data_size": 7936 00:18:02.696 }, 00:18:02.696 { 00:18:02.696 "name": "BaseBdev2", 00:18:02.696 "uuid": "a6a01921-b284-54ee-8da2-d162c851c4f8", 00:18:02.696 "is_configured": true, 00:18:02.696 "data_offset": 256, 00:18:02.696 "data_size": 7936 00:18:02.696 } 00:18:02.696 ] 00:18:02.696 }' 00:18:02.696 17:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.696 17:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:02.956 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:02.956 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.956 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:02.956 [2024-12-07 17:34:36.254689] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:02.956 [2024-12-07 17:34:36.254763] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:02.956 [2024-12-07 17:34:36.254862] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:02.956 [2024-12-07 17:34:36.254974] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all 
in destruct 00:18:02.956 [2024-12-07 17:34:36.255021] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:02.956 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.956 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:18:02.956 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.956 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.956 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:02.956 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.956 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:02.956 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:02.956 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:02.956 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:02.956 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:02.956 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:02.956 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:02.956 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:02.956 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:02.956 17:34:36 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:18:02.956 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:02.956 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:02.956 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:03.216 /dev/nbd0 00:18:03.216 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:03.216 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:03.216 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:03.216 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:18:03.216 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:03.216 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:03.216 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:03.216 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:18:03.216 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:03.216 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:03.216 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:03.216 1+0 records in 00:18:03.216 1+0 records out 00:18:03.216 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000412571 
s, 9.9 MB/s 00:18:03.216 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:03.216 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:18:03.216 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:03.216 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:03.216 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:18:03.216 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:03.216 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:03.216 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:03.477 /dev/nbd1 00:18:03.477 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:03.477 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:03.477 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:03.477 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:18:03.477 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:03.477 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:03.477 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:03.477 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@877 -- # break 00:18:03.477 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:03.477 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:03.477 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:03.477 1+0 records in 00:18:03.477 1+0 records out 00:18:03.477 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000430173 s, 9.5 MB/s 00:18:03.477 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:03.477 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:18:03.477 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:03.477 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:03.477 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:18:03.477 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:03.477 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:03.477 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:03.737 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:03.737 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:03.737 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:03.737 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:03.737 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:18:03.737 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:03.737 17:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:03.995 17:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:03.995 17:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:03.995 17:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:03.995 17:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:03.995 17:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:03.995 17:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:03.995 17:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:03.995 17:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:03.995 17:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:03.996 17:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:03.996 17:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:04.255 17:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:04.255 
17:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:04.255 17:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:04.255 17:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:04.255 17:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:04.255 17:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:04.255 17:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:04.255 17:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:04.255 17:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:04.255 17:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.255 17:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:04.255 17:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.255 17:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:04.255 17:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.255 17:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:04.255 [2024-12-07 17:34:37.403784] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:04.255 [2024-12-07 17:34:37.403838] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:04.255 [2024-12-07 17:34:37.403860] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 
00:18:04.255 [2024-12-07 17:34:37.403870] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:04.255 [2024-12-07 17:34:37.405893] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:04.255 [2024-12-07 17:34:37.405954] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:04.255 [2024-12-07 17:34:37.406016] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:04.255 [2024-12-07 17:34:37.406068] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:04.255 [2024-12-07 17:34:37.406212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:04.255 spare 00:18:04.255 17:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.255 17:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:04.255 17:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.255 17:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:04.255 [2024-12-07 17:34:37.506115] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:04.255 [2024-12-07 17:34:37.506145] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:04.255 [2024-12-07 17:34:37.506248] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:18:04.255 [2024-12-07 17:34:37.506381] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:04.255 [2024-12-07 17:34:37.506394] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:04.255 [2024-12-07 17:34:37.506533] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:18:04.255 17:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.255 17:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:04.255 17:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:04.255 17:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:04.255 17:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:04.255 17:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:04.255 17:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:04.255 17:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:04.255 17:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:04.255 17:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:04.255 17:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:04.255 17:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.255 17:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.255 17:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.255 17:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:04.255 17:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.255 17:34:37 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:04.255 "name": "raid_bdev1", 00:18:04.255 "uuid": "44318acc-cf36-49e9-96e9-e86012b13412", 00:18:04.255 "strip_size_kb": 0, 00:18:04.255 "state": "online", 00:18:04.255 "raid_level": "raid1", 00:18:04.255 "superblock": true, 00:18:04.255 "num_base_bdevs": 2, 00:18:04.255 "num_base_bdevs_discovered": 2, 00:18:04.255 "num_base_bdevs_operational": 2, 00:18:04.255 "base_bdevs_list": [ 00:18:04.255 { 00:18:04.255 "name": "spare", 00:18:04.255 "uuid": "147da9c1-c2d5-5d25-bd19-3a59c7233ca3", 00:18:04.255 "is_configured": true, 00:18:04.256 "data_offset": 256, 00:18:04.256 "data_size": 7936 00:18:04.256 }, 00:18:04.256 { 00:18:04.256 "name": "BaseBdev2", 00:18:04.256 "uuid": "a6a01921-b284-54ee-8da2-d162c851c4f8", 00:18:04.256 "is_configured": true, 00:18:04.256 "data_offset": 256, 00:18:04.256 "data_size": 7936 00:18:04.256 } 00:18:04.256 ] 00:18:04.256 }' 00:18:04.256 17:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:04.256 17:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:04.825 17:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:04.825 17:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:04.825 17:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:04.825 17:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:04.825 17:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:04.825 17:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.826 17:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.826 17:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:04.826 17:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.826 17:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.826 17:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:04.826 "name": "raid_bdev1", 00:18:04.826 "uuid": "44318acc-cf36-49e9-96e9-e86012b13412", 00:18:04.826 "strip_size_kb": 0, 00:18:04.826 "state": "online", 00:18:04.826 "raid_level": "raid1", 00:18:04.826 "superblock": true, 00:18:04.826 "num_base_bdevs": 2, 00:18:04.826 "num_base_bdevs_discovered": 2, 00:18:04.826 "num_base_bdevs_operational": 2, 00:18:04.826 "base_bdevs_list": [ 00:18:04.826 { 00:18:04.826 "name": "spare", 00:18:04.826 "uuid": "147da9c1-c2d5-5d25-bd19-3a59c7233ca3", 00:18:04.826 "is_configured": true, 00:18:04.826 "data_offset": 256, 00:18:04.826 "data_size": 7936 00:18:04.826 }, 00:18:04.826 { 00:18:04.826 "name": "BaseBdev2", 00:18:04.826 "uuid": "a6a01921-b284-54ee-8da2-d162c851c4f8", 00:18:04.826 "is_configured": true, 00:18:04.826 "data_offset": 256, 00:18:04.826 "data_size": 7936 00:18:04.826 } 00:18:04.826 ] 00:18:04.826 }' 00:18:04.826 17:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:04.826 17:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:04.826 17:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:04.826 17:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:04.826 17:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r 
'.[].base_bdevs_list[0].name' 00:18:04.826 17:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.826 17:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.826 17:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:04.826 17:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.826 17:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:04.826 17:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:04.826 17:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.826 17:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:04.826 [2024-12-07 17:34:38.142633] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:04.826 17:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.826 17:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:04.826 17:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:04.826 17:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:04.826 17:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:04.826 17:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:04.826 17:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:04.826 17:34:38 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:04.826 17:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:04.826 17:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:04.826 17:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:04.826 17:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.826 17:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.826 17:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:04.826 17:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.826 17:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.826 17:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:04.826 "name": "raid_bdev1", 00:18:04.826 "uuid": "44318acc-cf36-49e9-96e9-e86012b13412", 00:18:04.826 "strip_size_kb": 0, 00:18:04.826 "state": "online", 00:18:04.826 "raid_level": "raid1", 00:18:04.826 "superblock": true, 00:18:04.826 "num_base_bdevs": 2, 00:18:04.826 "num_base_bdevs_discovered": 1, 00:18:04.826 "num_base_bdevs_operational": 1, 00:18:04.826 "base_bdevs_list": [ 00:18:04.826 { 00:18:04.826 "name": null, 00:18:04.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.826 "is_configured": false, 00:18:04.826 "data_offset": 0, 00:18:04.826 "data_size": 7936 00:18:04.826 }, 00:18:04.826 { 00:18:04.826 "name": "BaseBdev2", 00:18:04.826 "uuid": "a6a01921-b284-54ee-8da2-d162c851c4f8", 00:18:04.826 "is_configured": true, 00:18:04.826 "data_offset": 256, 00:18:04.826 "data_size": 7936 00:18:04.826 } 
00:18:04.826 ] 00:18:04.826 }' 00:18:04.826 17:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:04.826 17:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:05.396 17:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:05.396 17:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.396 17:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:05.396 [2024-12-07 17:34:38.578120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:05.396 [2024-12-07 17:34:38.578304] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:05.396 [2024-12-07 17:34:38.578330] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:05.396 [2024-12-07 17:34:38.578364] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:05.396 [2024-12-07 17:34:38.591672] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:18:05.396 17:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.396 17:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:05.396 [2024-12-07 17:34:38.593491] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:06.335 17:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:06.335 17:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:06.335 17:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:06.335 17:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:06.335 17:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:06.335 17:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.335 17:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.335 17:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.335 17:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.335 17:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.335 17:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:06.335 "name": "raid_bdev1", 00:18:06.335 
"uuid": "44318acc-cf36-49e9-96e9-e86012b13412", 00:18:06.335 "strip_size_kb": 0, 00:18:06.335 "state": "online", 00:18:06.335 "raid_level": "raid1", 00:18:06.335 "superblock": true, 00:18:06.335 "num_base_bdevs": 2, 00:18:06.335 "num_base_bdevs_discovered": 2, 00:18:06.335 "num_base_bdevs_operational": 2, 00:18:06.335 "process": { 00:18:06.335 "type": "rebuild", 00:18:06.335 "target": "spare", 00:18:06.335 "progress": { 00:18:06.335 "blocks": 2560, 00:18:06.335 "percent": 32 00:18:06.335 } 00:18:06.335 }, 00:18:06.335 "base_bdevs_list": [ 00:18:06.335 { 00:18:06.335 "name": "spare", 00:18:06.335 "uuid": "147da9c1-c2d5-5d25-bd19-3a59c7233ca3", 00:18:06.335 "is_configured": true, 00:18:06.335 "data_offset": 256, 00:18:06.335 "data_size": 7936 00:18:06.335 }, 00:18:06.335 { 00:18:06.335 "name": "BaseBdev2", 00:18:06.335 "uuid": "a6a01921-b284-54ee-8da2-d162c851c4f8", 00:18:06.335 "is_configured": true, 00:18:06.335 "data_offset": 256, 00:18:06.335 "data_size": 7936 00:18:06.335 } 00:18:06.335 ] 00:18:06.335 }' 00:18:06.335 17:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:06.335 17:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:06.335 17:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:06.595 17:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:06.595 17:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:06.595 17:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.595 17:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.595 [2024-12-07 17:34:39.737526] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:06.595 
[2024-12-07 17:34:39.798413] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:06.595 [2024-12-07 17:34:39.798473] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:06.595 [2024-12-07 17:34:39.798504] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:06.595 [2024-12-07 17:34:39.798524] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:06.595 17:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.595 17:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:06.595 17:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:06.595 17:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:06.595 17:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:06.595 17:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:06.595 17:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:06.595 17:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.595 17:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.595 17:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.595 17:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.595 17:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.595 17:34:39 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.595 17:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.595 17:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.595 17:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.595 17:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:06.595 "name": "raid_bdev1", 00:18:06.595 "uuid": "44318acc-cf36-49e9-96e9-e86012b13412", 00:18:06.595 "strip_size_kb": 0, 00:18:06.595 "state": "online", 00:18:06.595 "raid_level": "raid1", 00:18:06.595 "superblock": true, 00:18:06.595 "num_base_bdevs": 2, 00:18:06.595 "num_base_bdevs_discovered": 1, 00:18:06.595 "num_base_bdevs_operational": 1, 00:18:06.595 "base_bdevs_list": [ 00:18:06.595 { 00:18:06.595 "name": null, 00:18:06.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.595 "is_configured": false, 00:18:06.595 "data_offset": 0, 00:18:06.595 "data_size": 7936 00:18:06.595 }, 00:18:06.595 { 00:18:06.595 "name": "BaseBdev2", 00:18:06.595 "uuid": "a6a01921-b284-54ee-8da2-d162c851c4f8", 00:18:06.595 "is_configured": true, 00:18:06.595 "data_offset": 256, 00:18:06.595 "data_size": 7936 00:18:06.595 } 00:18:06.595 ] 00:18:06.595 }' 00:18:06.595 17:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:06.595 17:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:07.165 17:34:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:07.165 17:34:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.165 17:34:40 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:18:07.165 [2024-12-07 17:34:40.269490] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:07.165 [2024-12-07 17:34:40.269560] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:07.165 [2024-12-07 17:34:40.269589] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:07.165 [2024-12-07 17:34:40.269601] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:07.165 [2024-12-07 17:34:40.269879] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:07.165 [2024-12-07 17:34:40.269906] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:07.165 [2024-12-07 17:34:40.269982] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:07.165 [2024-12-07 17:34:40.269998] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:07.165 [2024-12-07 17:34:40.270008] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:07.165 [2024-12-07 17:34:40.270030] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:07.165 [2024-12-07 17:34:40.283542] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:18:07.165 spare 00:18:07.165 17:34:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.165 17:34:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:07.165 [2024-12-07 17:34:40.285363] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:08.104 17:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:08.104 17:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:08.104 17:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:08.104 17:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:08.104 17:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:08.104 17:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.104 17:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.104 17:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.104 17:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.104 17:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.104 17:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:08.104 "name": 
"raid_bdev1", 00:18:08.104 "uuid": "44318acc-cf36-49e9-96e9-e86012b13412", 00:18:08.104 "strip_size_kb": 0, 00:18:08.104 "state": "online", 00:18:08.104 "raid_level": "raid1", 00:18:08.104 "superblock": true, 00:18:08.104 "num_base_bdevs": 2, 00:18:08.104 "num_base_bdevs_discovered": 2, 00:18:08.104 "num_base_bdevs_operational": 2, 00:18:08.104 "process": { 00:18:08.104 "type": "rebuild", 00:18:08.104 "target": "spare", 00:18:08.104 "progress": { 00:18:08.104 "blocks": 2560, 00:18:08.104 "percent": 32 00:18:08.104 } 00:18:08.104 }, 00:18:08.104 "base_bdevs_list": [ 00:18:08.104 { 00:18:08.104 "name": "spare", 00:18:08.104 "uuid": "147da9c1-c2d5-5d25-bd19-3a59c7233ca3", 00:18:08.104 "is_configured": true, 00:18:08.104 "data_offset": 256, 00:18:08.104 "data_size": 7936 00:18:08.104 }, 00:18:08.104 { 00:18:08.104 "name": "BaseBdev2", 00:18:08.104 "uuid": "a6a01921-b284-54ee-8da2-d162c851c4f8", 00:18:08.104 "is_configured": true, 00:18:08.104 "data_offset": 256, 00:18:08.104 "data_size": 7936 00:18:08.104 } 00:18:08.104 ] 00:18:08.104 }' 00:18:08.104 17:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:08.104 17:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:08.104 17:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:08.104 17:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:08.104 17:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:08.104 17:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.104 17:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.104 [2024-12-07 17:34:41.449047] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:18:08.363 [2024-12-07 17:34:41.490530] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:08.363 [2024-12-07 17:34:41.490599] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:08.363 [2024-12-07 17:34:41.490616] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:08.363 [2024-12-07 17:34:41.490623] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:08.363 17:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.363 17:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:08.363 17:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:08.363 17:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:08.363 17:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:08.363 17:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:08.363 17:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:08.363 17:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:08.363 17:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:08.363 17:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:08.363 17:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:08.363 17:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:18:08.363 17:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.363 17:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.363 17:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.363 17:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.363 17:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:08.363 "name": "raid_bdev1", 00:18:08.363 "uuid": "44318acc-cf36-49e9-96e9-e86012b13412", 00:18:08.363 "strip_size_kb": 0, 00:18:08.363 "state": "online", 00:18:08.363 "raid_level": "raid1", 00:18:08.363 "superblock": true, 00:18:08.363 "num_base_bdevs": 2, 00:18:08.363 "num_base_bdevs_discovered": 1, 00:18:08.363 "num_base_bdevs_operational": 1, 00:18:08.363 "base_bdevs_list": [ 00:18:08.363 { 00:18:08.363 "name": null, 00:18:08.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.363 "is_configured": false, 00:18:08.363 "data_offset": 0, 00:18:08.363 "data_size": 7936 00:18:08.363 }, 00:18:08.363 { 00:18:08.363 "name": "BaseBdev2", 00:18:08.363 "uuid": "a6a01921-b284-54ee-8da2-d162c851c4f8", 00:18:08.363 "is_configured": true, 00:18:08.363 "data_offset": 256, 00:18:08.363 "data_size": 7936 00:18:08.363 } 00:18:08.363 ] 00:18:08.363 }' 00:18:08.363 17:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:08.363 17:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.622 17:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:08.622 17:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:08.622 17:34:41 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:08.622 17:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:08.622 17:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:08.622 17:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.622 17:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.622 17:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.622 17:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.622 17:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.881 17:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:08.881 "name": "raid_bdev1", 00:18:08.881 "uuid": "44318acc-cf36-49e9-96e9-e86012b13412", 00:18:08.881 "strip_size_kb": 0, 00:18:08.881 "state": "online", 00:18:08.881 "raid_level": "raid1", 00:18:08.881 "superblock": true, 00:18:08.881 "num_base_bdevs": 2, 00:18:08.881 "num_base_bdevs_discovered": 1, 00:18:08.881 "num_base_bdevs_operational": 1, 00:18:08.881 "base_bdevs_list": [ 00:18:08.881 { 00:18:08.881 "name": null, 00:18:08.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.881 "is_configured": false, 00:18:08.881 "data_offset": 0, 00:18:08.881 "data_size": 7936 00:18:08.881 }, 00:18:08.881 { 00:18:08.881 "name": "BaseBdev2", 00:18:08.881 "uuid": "a6a01921-b284-54ee-8da2-d162c851c4f8", 00:18:08.881 "is_configured": true, 00:18:08.881 "data_offset": 256, 00:18:08.881 "data_size": 7936 00:18:08.881 } 00:18:08.881 ] 00:18:08.881 }' 00:18:08.881 17:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:08.881 17:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:08.881 17:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:08.881 17:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:08.881 17:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:08.881 17:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.881 17:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.881 17:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.881 17:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:08.881 17:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.881 17:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.881 [2024-12-07 17:34:42.129168] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:08.881 [2024-12-07 17:34:42.129226] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:08.881 [2024-12-07 17:34:42.129247] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:08.881 [2024-12-07 17:34:42.129257] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:08.881 [2024-12-07 17:34:42.129511] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:08.881 [2024-12-07 17:34:42.129532] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:18:08.881 [2024-12-07 17:34:42.129586] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:08.881 [2024-12-07 17:34:42.129599] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:08.881 [2024-12-07 17:34:42.129611] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:08.881 [2024-12-07 17:34:42.129621] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:08.881 BaseBdev1 00:18:08.881 17:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.881 17:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:09.820 17:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:09.820 17:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:09.820 17:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:09.820 17:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:09.820 17:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:09.820 17:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:09.820 17:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:09.820 17:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:09.820 17:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:18:09.820 17:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:09.820 17:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.820 17:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.820 17:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.820 17:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.820 17:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.820 17:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:09.820 "name": "raid_bdev1", 00:18:09.820 "uuid": "44318acc-cf36-49e9-96e9-e86012b13412", 00:18:09.820 "strip_size_kb": 0, 00:18:09.820 "state": "online", 00:18:09.820 "raid_level": "raid1", 00:18:09.820 "superblock": true, 00:18:09.820 "num_base_bdevs": 2, 00:18:09.820 "num_base_bdevs_discovered": 1, 00:18:09.820 "num_base_bdevs_operational": 1, 00:18:09.820 "base_bdevs_list": [ 00:18:09.820 { 00:18:09.820 "name": null, 00:18:09.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.820 "is_configured": false, 00:18:09.820 "data_offset": 0, 00:18:09.820 "data_size": 7936 00:18:09.820 }, 00:18:09.820 { 00:18:09.820 "name": "BaseBdev2", 00:18:09.820 "uuid": "a6a01921-b284-54ee-8da2-d162c851c4f8", 00:18:09.820 "is_configured": true, 00:18:09.820 "data_offset": 256, 00:18:09.820 "data_size": 7936 00:18:09.820 } 00:18:09.820 ] 00:18:09.820 }' 00:18:09.820 17:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:09.820 17:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.390 17:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:18:10.390 17:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:10.390 17:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:10.390 17:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:10.390 17:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:10.390 17:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.390 17:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.390 17:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.390 17:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.390 17:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.390 17:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:10.390 "name": "raid_bdev1", 00:18:10.390 "uuid": "44318acc-cf36-49e9-96e9-e86012b13412", 00:18:10.390 "strip_size_kb": 0, 00:18:10.390 "state": "online", 00:18:10.390 "raid_level": "raid1", 00:18:10.390 "superblock": true, 00:18:10.390 "num_base_bdevs": 2, 00:18:10.390 "num_base_bdevs_discovered": 1, 00:18:10.390 "num_base_bdevs_operational": 1, 00:18:10.390 "base_bdevs_list": [ 00:18:10.390 { 00:18:10.390 "name": null, 00:18:10.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.390 "is_configured": false, 00:18:10.390 "data_offset": 0, 00:18:10.390 "data_size": 7936 00:18:10.390 }, 00:18:10.390 { 00:18:10.390 "name": "BaseBdev2", 00:18:10.390 "uuid": "a6a01921-b284-54ee-8da2-d162c851c4f8", 00:18:10.390 "is_configured": 
true, 00:18:10.390 "data_offset": 256, 00:18:10.390 "data_size": 7936 00:18:10.390 } 00:18:10.390 ] 00:18:10.390 }' 00:18:10.390 17:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:10.390 17:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:10.390 17:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:10.390 17:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:10.390 17:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:10.390 17:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:18:10.390 17:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:10.390 17:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:10.390 17:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:10.390 17:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:10.390 17:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:10.390 17:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:10.390 17:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.390 17:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.390 [2024-12-07 17:34:43.742691] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:10.390 [2024-12-07 17:34:43.742880] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:10.390 [2024-12-07 17:34:43.742903] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:10.390 request: 00:18:10.390 { 00:18:10.390 "base_bdev": "BaseBdev1", 00:18:10.390 "raid_bdev": "raid_bdev1", 00:18:10.390 "method": "bdev_raid_add_base_bdev", 00:18:10.390 "req_id": 1 00:18:10.390 } 00:18:10.390 Got JSON-RPC error response 00:18:10.390 response: 00:18:10.390 { 00:18:10.390 "code": -22, 00:18:10.390 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:10.390 } 00:18:10.390 17:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:10.390 17:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:18:10.390 17:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:10.391 17:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:10.391 17:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:10.391 17:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:11.772 17:34:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:11.772 17:34:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:11.772 17:34:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:11.772 17:34:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:18:11.772 17:34:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:11.772 17:34:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:11.772 17:34:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:11.772 17:34:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:11.772 17:34:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:11.772 17:34:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:11.772 17:34:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.772 17:34:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.772 17:34:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.772 17:34:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.772 17:34:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.772 17:34:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:11.772 "name": "raid_bdev1", 00:18:11.772 "uuid": "44318acc-cf36-49e9-96e9-e86012b13412", 00:18:11.772 "strip_size_kb": 0, 00:18:11.772 "state": "online", 00:18:11.772 "raid_level": "raid1", 00:18:11.772 "superblock": true, 00:18:11.772 "num_base_bdevs": 2, 00:18:11.772 "num_base_bdevs_discovered": 1, 00:18:11.772 "num_base_bdevs_operational": 1, 00:18:11.772 "base_bdevs_list": [ 00:18:11.772 { 00:18:11.772 "name": null, 00:18:11.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.772 "is_configured": false, 00:18:11.772 
"data_offset": 0, 00:18:11.772 "data_size": 7936 00:18:11.772 }, 00:18:11.772 { 00:18:11.772 "name": "BaseBdev2", 00:18:11.772 "uuid": "a6a01921-b284-54ee-8da2-d162c851c4f8", 00:18:11.772 "is_configured": true, 00:18:11.772 "data_offset": 256, 00:18:11.772 "data_size": 7936 00:18:11.772 } 00:18:11.772 ] 00:18:11.772 }' 00:18:11.772 17:34:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:11.772 17:34:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.032 17:34:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:12.032 17:34:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:12.032 17:34:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:12.032 17:34:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:12.032 17:34:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:12.032 17:34:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.032 17:34:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.032 17:34:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.032 17:34:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.032 17:34:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.032 17:34:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:12.032 "name": "raid_bdev1", 00:18:12.032 "uuid": "44318acc-cf36-49e9-96e9-e86012b13412", 00:18:12.032 
"strip_size_kb": 0, 00:18:12.032 "state": "online", 00:18:12.032 "raid_level": "raid1", 00:18:12.032 "superblock": true, 00:18:12.032 "num_base_bdevs": 2, 00:18:12.032 "num_base_bdevs_discovered": 1, 00:18:12.032 "num_base_bdevs_operational": 1, 00:18:12.032 "base_bdevs_list": [ 00:18:12.032 { 00:18:12.032 "name": null, 00:18:12.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.032 "is_configured": false, 00:18:12.032 "data_offset": 0, 00:18:12.032 "data_size": 7936 00:18:12.032 }, 00:18:12.032 { 00:18:12.032 "name": "BaseBdev2", 00:18:12.032 "uuid": "a6a01921-b284-54ee-8da2-d162c851c4f8", 00:18:12.032 "is_configured": true, 00:18:12.032 "data_offset": 256, 00:18:12.032 "data_size": 7936 00:18:12.032 } 00:18:12.032 ] 00:18:12.032 }' 00:18:12.032 17:34:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:12.032 17:34:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:12.032 17:34:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:12.032 17:34:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:12.032 17:34:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 87750 00:18:12.032 17:34:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87750 ']' 00:18:12.032 17:34:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87750 00:18:12.032 17:34:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:18:12.032 17:34:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:12.032 17:34:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87750 00:18:12.032 killing process with 
pid 87750 00:18:12.032 Received shutdown signal, test time was about 60.000000 seconds 00:18:12.032 00:18:12.032 Latency(us) 00:18:12.032 [2024-12-07T17:34:45.414Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:12.032 [2024-12-07T17:34:45.414Z] =================================================================================================================== 00:18:12.032 [2024-12-07T17:34:45.414Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:12.032 17:34:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:12.032 17:34:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:12.032 17:34:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87750' 00:18:12.032 17:34:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87750 00:18:12.032 [2024-12-07 17:34:45.366083] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:12.032 [2024-12-07 17:34:45.366200] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:12.032 [2024-12-07 17:34:45.366250] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:12.032 17:34:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87750 00:18:12.032 [2024-12-07 17:34:45.366261] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:12.599 [2024-12-07 17:34:45.672922] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:13.537 17:34:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:18:13.537 00:18:13.537 real 0m19.563s 00:18:13.537 user 0m25.637s 00:18:13.537 sys 0m2.514s 00:18:13.537 17:34:46 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:13.537 ************************************ 00:18:13.537 END TEST raid_rebuild_test_sb_md_separate 00:18:13.537 17:34:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.537 ************************************ 00:18:13.537 17:34:46 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:18:13.537 17:34:46 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:18:13.537 17:34:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:13.537 17:34:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:13.537 17:34:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:13.537 ************************************ 00:18:13.537 START TEST raid_state_function_test_sb_md_interleaved 00:18:13.537 ************************************ 00:18:13.537 17:34:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:18:13.537 17:34:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:13.537 17:34:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:13.537 17:34:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:13.537 17:34:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:13.537 17:34:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:13.537 17:34:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:13.537 17:34:46 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:13.537 17:34:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:13.537 17:34:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:13.537 17:34:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:13.537 17:34:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:13.537 17:34:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:13.537 17:34:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:13.537 17:34:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:13.537 17:34:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:13.537 17:34:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:13.537 17:34:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:13.537 17:34:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:13.537 17:34:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:13.537 17:34:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:13.537 17:34:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:13.537 17:34:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:18:13.537 17:34:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88436 00:18:13.537 17:34:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:13.537 Process raid pid: 88436 00:18:13.537 17:34:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88436' 00:18:13.537 17:34:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88436 00:18:13.537 17:34:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88436 ']' 00:18:13.537 17:34:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:13.537 17:34:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:13.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:13.537 17:34:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:13.537 17:34:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:13.537 17:34:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:13.537 [2024-12-07 17:34:46.893496] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:18:13.537 [2024-12-07 17:34:46.893606] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:13.796 [2024-12-07 17:34:47.065843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.796 [2024-12-07 17:34:47.175954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.054 [2024-12-07 17:34:47.363425] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:14.054 [2024-12-07 17:34:47.363463] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:14.623 17:34:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:14.623 17:34:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:18:14.623 17:34:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:14.623 17:34:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.623 17:34:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.623 [2024-12-07 17:34:47.735849] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:14.623 [2024-12-07 17:34:47.735905] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:14.623 [2024-12-07 17:34:47.735915] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:14.623 [2024-12-07 17:34:47.735925] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:14.623 17:34:47 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.623 17:34:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:14.623 17:34:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:14.623 17:34:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:14.623 17:34:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:14.623 17:34:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:14.623 17:34:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:14.623 17:34:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:14.623 17:34:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:14.623 17:34:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:14.623 17:34:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:14.623 17:34:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.623 17:34:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:14.623 17:34:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.623 17:34:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.623 17:34:47 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.623 17:34:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.623 "name": "Existed_Raid", 00:18:14.623 "uuid": "d89de53a-2d30-4f2b-9a2b-8e695818e778", 00:18:14.623 "strip_size_kb": 0, 00:18:14.623 "state": "configuring", 00:18:14.623 "raid_level": "raid1", 00:18:14.623 "superblock": true, 00:18:14.623 "num_base_bdevs": 2, 00:18:14.623 "num_base_bdevs_discovered": 0, 00:18:14.623 "num_base_bdevs_operational": 2, 00:18:14.623 "base_bdevs_list": [ 00:18:14.623 { 00:18:14.623 "name": "BaseBdev1", 00:18:14.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.623 "is_configured": false, 00:18:14.623 "data_offset": 0, 00:18:14.623 "data_size": 0 00:18:14.623 }, 00:18:14.623 { 00:18:14.623 "name": "BaseBdev2", 00:18:14.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.623 "is_configured": false, 00:18:14.623 "data_offset": 0, 00:18:14.623 "data_size": 0 00:18:14.623 } 00:18:14.623 ] 00:18:14.623 }' 00:18:14.623 17:34:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.623 17:34:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.883 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:14.883 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.883 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.883 [2024-12-07 17:34:48.222997] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:14.883 [2024-12-07 17:34:48.223031] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:18:14.883 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.883 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:14.883 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.883 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.883 [2024-12-07 17:34:48.234974] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:14.883 [2024-12-07 17:34:48.235014] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:14.883 [2024-12-07 17:34:48.235022] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:14.883 [2024-12-07 17:34:48.235033] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:14.883 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.883 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:18:14.883 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.883 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.145 [2024-12-07 17:34:48.281439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:15.145 BaseBdev1 00:18:15.146 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.146 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:15.146 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:15.146 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:15.146 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:18:15.146 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:15.146 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:15.146 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:15.146 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.146 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.146 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.146 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:15.146 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.146 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.146 [ 00:18:15.146 { 00:18:15.146 "name": "BaseBdev1", 00:18:15.146 "aliases": [ 00:18:15.146 "ab6ed949-ad48-4de4-8eae-575a7ff98b31" 00:18:15.146 ], 00:18:15.146 "product_name": "Malloc disk", 00:18:15.146 "block_size": 4128, 00:18:15.146 "num_blocks": 8192, 00:18:15.146 "uuid": "ab6ed949-ad48-4de4-8eae-575a7ff98b31", 00:18:15.146 "md_size": 32, 00:18:15.146 
"md_interleave": true, 00:18:15.146 "dif_type": 0, 00:18:15.146 "assigned_rate_limits": { 00:18:15.146 "rw_ios_per_sec": 0, 00:18:15.146 "rw_mbytes_per_sec": 0, 00:18:15.146 "r_mbytes_per_sec": 0, 00:18:15.146 "w_mbytes_per_sec": 0 00:18:15.146 }, 00:18:15.146 "claimed": true, 00:18:15.146 "claim_type": "exclusive_write", 00:18:15.146 "zoned": false, 00:18:15.146 "supported_io_types": { 00:18:15.146 "read": true, 00:18:15.146 "write": true, 00:18:15.146 "unmap": true, 00:18:15.146 "flush": true, 00:18:15.146 "reset": true, 00:18:15.146 "nvme_admin": false, 00:18:15.146 "nvme_io": false, 00:18:15.146 "nvme_io_md": false, 00:18:15.146 "write_zeroes": true, 00:18:15.146 "zcopy": true, 00:18:15.146 "get_zone_info": false, 00:18:15.146 "zone_management": false, 00:18:15.146 "zone_append": false, 00:18:15.146 "compare": false, 00:18:15.146 "compare_and_write": false, 00:18:15.146 "abort": true, 00:18:15.146 "seek_hole": false, 00:18:15.146 "seek_data": false, 00:18:15.146 "copy": true, 00:18:15.146 "nvme_iov_md": false 00:18:15.146 }, 00:18:15.146 "memory_domains": [ 00:18:15.146 { 00:18:15.146 "dma_device_id": "system", 00:18:15.146 "dma_device_type": 1 00:18:15.146 }, 00:18:15.146 { 00:18:15.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:15.146 "dma_device_type": 2 00:18:15.146 } 00:18:15.146 ], 00:18:15.146 "driver_specific": {} 00:18:15.146 } 00:18:15.146 ] 00:18:15.146 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.146 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:18:15.146 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:15.146 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:15.146 17:34:48 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:15.146 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:15.146 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:15.146 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:15.146 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:15.146 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:15.146 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:15.146 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:15.146 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.146 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:15.146 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.146 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.146 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.146 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:15.146 "name": "Existed_Raid", 00:18:15.146 "uuid": "8eec2dda-5969-4c3f-93cc-044d85daa35f", 00:18:15.146 "strip_size_kb": 0, 00:18:15.146 "state": "configuring", 00:18:15.146 "raid_level": "raid1", 
00:18:15.146 "superblock": true, 00:18:15.146 "num_base_bdevs": 2, 00:18:15.146 "num_base_bdevs_discovered": 1, 00:18:15.146 "num_base_bdevs_operational": 2, 00:18:15.146 "base_bdevs_list": [ 00:18:15.146 { 00:18:15.146 "name": "BaseBdev1", 00:18:15.146 "uuid": "ab6ed949-ad48-4de4-8eae-575a7ff98b31", 00:18:15.146 "is_configured": true, 00:18:15.146 "data_offset": 256, 00:18:15.146 "data_size": 7936 00:18:15.146 }, 00:18:15.146 { 00:18:15.146 "name": "BaseBdev2", 00:18:15.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.146 "is_configured": false, 00:18:15.146 "data_offset": 0, 00:18:15.146 "data_size": 0 00:18:15.146 } 00:18:15.146 ] 00:18:15.146 }' 00:18:15.146 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:15.146 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.418 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:15.418 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.418 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.418 [2024-12-07 17:34:48.776713] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:15.418 [2024-12-07 17:34:48.776767] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:15.418 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.418 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:15.418 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:18:15.418 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.418 [2024-12-07 17:34:48.788736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:15.418 [2024-12-07 17:34:48.790539] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:15.418 [2024-12-07 17:34:48.790666] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:15.418 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.418 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:15.418 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:15.418 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:15.418 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:15.418 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:15.418 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:15.418 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:15.419 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:15.419 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:15.419 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:15.419 
17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:15.419 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:15.691 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.691 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:15.691 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.691 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.691 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.691 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:15.691 "name": "Existed_Raid", 00:18:15.691 "uuid": "0a1bdc20-85a1-4747-9b19-d3596396315a", 00:18:15.691 "strip_size_kb": 0, 00:18:15.691 "state": "configuring", 00:18:15.691 "raid_level": "raid1", 00:18:15.691 "superblock": true, 00:18:15.691 "num_base_bdevs": 2, 00:18:15.691 "num_base_bdevs_discovered": 1, 00:18:15.691 "num_base_bdevs_operational": 2, 00:18:15.691 "base_bdevs_list": [ 00:18:15.691 { 00:18:15.691 "name": "BaseBdev1", 00:18:15.691 "uuid": "ab6ed949-ad48-4de4-8eae-575a7ff98b31", 00:18:15.691 "is_configured": true, 00:18:15.691 "data_offset": 256, 00:18:15.691 "data_size": 7936 00:18:15.691 }, 00:18:15.691 { 00:18:15.691 "name": "BaseBdev2", 00:18:15.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.691 "is_configured": false, 00:18:15.691 "data_offset": 0, 00:18:15.691 "data_size": 0 00:18:15.691 } 00:18:15.691 ] 00:18:15.691 }' 00:18:15.691 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:18:15.691 17:34:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.965 17:34:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:18:15.965 17:34:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.965 17:34:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.965 [2024-12-07 17:34:49.317965] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:15.965 [2024-12-07 17:34:49.318258] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:15.965 [2024-12-07 17:34:49.318309] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:15.965 [2024-12-07 17:34:49.318408] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:15.965 [2024-12-07 17:34:49.318510] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:15.965 [2024-12-07 17:34:49.318547] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:15.965 [2024-12-07 17:34:49.318642] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:15.965 BaseBdev2 00:18:15.965 17:34:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.965 17:34:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:15.965 17:34:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:15.965 17:34:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:18:15.965 17:34:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:18:15.965 17:34:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:15.965 17:34:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:15.965 17:34:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:15.965 17:34:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.965 17:34:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.965 17:34:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.965 17:34:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:15.965 17:34:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.965 17:34:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.226 [ 00:18:16.226 { 00:18:16.226 "name": "BaseBdev2", 00:18:16.226 "aliases": [ 00:18:16.226 "1fe42661-f20a-4b40-8827-46939682739a" 00:18:16.226 ], 00:18:16.226 "product_name": "Malloc disk", 00:18:16.226 "block_size": 4128, 00:18:16.226 "num_blocks": 8192, 00:18:16.226 "uuid": "1fe42661-f20a-4b40-8827-46939682739a", 00:18:16.226 "md_size": 32, 00:18:16.226 "md_interleave": true, 00:18:16.226 "dif_type": 0, 00:18:16.226 "assigned_rate_limits": { 00:18:16.226 "rw_ios_per_sec": 0, 00:18:16.226 "rw_mbytes_per_sec": 0, 00:18:16.226 "r_mbytes_per_sec": 0, 00:18:16.226 "w_mbytes_per_sec": 0 00:18:16.226 }, 00:18:16.226 "claimed": true, 00:18:16.226 "claim_type": "exclusive_write", 
00:18:16.226 "zoned": false, 00:18:16.226 "supported_io_types": { 00:18:16.226 "read": true, 00:18:16.226 "write": true, 00:18:16.226 "unmap": true, 00:18:16.226 "flush": true, 00:18:16.226 "reset": true, 00:18:16.226 "nvme_admin": false, 00:18:16.226 "nvme_io": false, 00:18:16.226 "nvme_io_md": false, 00:18:16.226 "write_zeroes": true, 00:18:16.226 "zcopy": true, 00:18:16.226 "get_zone_info": false, 00:18:16.226 "zone_management": false, 00:18:16.226 "zone_append": false, 00:18:16.226 "compare": false, 00:18:16.226 "compare_and_write": false, 00:18:16.226 "abort": true, 00:18:16.226 "seek_hole": false, 00:18:16.226 "seek_data": false, 00:18:16.226 "copy": true, 00:18:16.226 "nvme_iov_md": false 00:18:16.226 }, 00:18:16.226 "memory_domains": [ 00:18:16.226 { 00:18:16.226 "dma_device_id": "system", 00:18:16.226 "dma_device_type": 1 00:18:16.226 }, 00:18:16.226 { 00:18:16.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:16.226 "dma_device_type": 2 00:18:16.226 } 00:18:16.226 ], 00:18:16.226 "driver_specific": {} 00:18:16.226 } 00:18:16.226 ] 00:18:16.226 17:34:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.226 17:34:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:18:16.226 17:34:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:16.226 17:34:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:16.226 17:34:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:16.226 17:34:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:16.226 17:34:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:16.226 
17:34:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:16.226 17:34:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:16.226 17:34:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:16.226 17:34:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:16.226 17:34:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.226 17:34:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:16.226 17:34:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.226 17:34:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.226 17:34:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.226 17:34:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:16.226 17:34:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.226 17:34:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.226 17:34:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:16.226 "name": "Existed_Raid", 00:18:16.226 "uuid": "0a1bdc20-85a1-4747-9b19-d3596396315a", 00:18:16.226 "strip_size_kb": 0, 00:18:16.226 "state": "online", 00:18:16.226 "raid_level": "raid1", 00:18:16.226 "superblock": true, 00:18:16.226 "num_base_bdevs": 2, 00:18:16.226 "num_base_bdevs_discovered": 2, 00:18:16.226 
"num_base_bdevs_operational": 2, 00:18:16.226 "base_bdevs_list": [ 00:18:16.226 { 00:18:16.226 "name": "BaseBdev1", 00:18:16.226 "uuid": "ab6ed949-ad48-4de4-8eae-575a7ff98b31", 00:18:16.226 "is_configured": true, 00:18:16.226 "data_offset": 256, 00:18:16.226 "data_size": 7936 00:18:16.226 }, 00:18:16.226 { 00:18:16.226 "name": "BaseBdev2", 00:18:16.226 "uuid": "1fe42661-f20a-4b40-8827-46939682739a", 00:18:16.226 "is_configured": true, 00:18:16.226 "data_offset": 256, 00:18:16.226 "data_size": 7936 00:18:16.226 } 00:18:16.226 ] 00:18:16.226 }' 00:18:16.226 17:34:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:16.226 17:34:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.487 17:34:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:16.487 17:34:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:16.487 17:34:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:16.487 17:34:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:16.487 17:34:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:16.487 17:34:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:16.487 17:34:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:16.487 17:34:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.487 17:34:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.487 17:34:49 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:16.487 [2024-12-07 17:34:49.817439] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:16.487 17:34:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.487 17:34:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:16.487 "name": "Existed_Raid", 00:18:16.487 "aliases": [ 00:18:16.487 "0a1bdc20-85a1-4747-9b19-d3596396315a" 00:18:16.487 ], 00:18:16.487 "product_name": "Raid Volume", 00:18:16.487 "block_size": 4128, 00:18:16.487 "num_blocks": 7936, 00:18:16.487 "uuid": "0a1bdc20-85a1-4747-9b19-d3596396315a", 00:18:16.487 "md_size": 32, 00:18:16.487 "md_interleave": true, 00:18:16.487 "dif_type": 0, 00:18:16.487 "assigned_rate_limits": { 00:18:16.487 "rw_ios_per_sec": 0, 00:18:16.487 "rw_mbytes_per_sec": 0, 00:18:16.487 "r_mbytes_per_sec": 0, 00:18:16.487 "w_mbytes_per_sec": 0 00:18:16.487 }, 00:18:16.487 "claimed": false, 00:18:16.487 "zoned": false, 00:18:16.487 "supported_io_types": { 00:18:16.487 "read": true, 00:18:16.487 "write": true, 00:18:16.487 "unmap": false, 00:18:16.487 "flush": false, 00:18:16.487 "reset": true, 00:18:16.487 "nvme_admin": false, 00:18:16.487 "nvme_io": false, 00:18:16.487 "nvme_io_md": false, 00:18:16.487 "write_zeroes": true, 00:18:16.487 "zcopy": false, 00:18:16.487 "get_zone_info": false, 00:18:16.487 "zone_management": false, 00:18:16.487 "zone_append": false, 00:18:16.487 "compare": false, 00:18:16.487 "compare_and_write": false, 00:18:16.487 "abort": false, 00:18:16.487 "seek_hole": false, 00:18:16.487 "seek_data": false, 00:18:16.487 "copy": false, 00:18:16.487 "nvme_iov_md": false 00:18:16.487 }, 00:18:16.487 "memory_domains": [ 00:18:16.487 { 00:18:16.487 "dma_device_id": "system", 00:18:16.487 "dma_device_type": 1 00:18:16.487 }, 00:18:16.487 { 00:18:16.487 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:18:16.487 "dma_device_type": 2 00:18:16.487 }, 00:18:16.487 { 00:18:16.487 "dma_device_id": "system", 00:18:16.487 "dma_device_type": 1 00:18:16.487 }, 00:18:16.487 { 00:18:16.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:16.487 "dma_device_type": 2 00:18:16.487 } 00:18:16.487 ], 00:18:16.487 "driver_specific": { 00:18:16.487 "raid": { 00:18:16.487 "uuid": "0a1bdc20-85a1-4747-9b19-d3596396315a", 00:18:16.487 "strip_size_kb": 0, 00:18:16.487 "state": "online", 00:18:16.487 "raid_level": "raid1", 00:18:16.487 "superblock": true, 00:18:16.487 "num_base_bdevs": 2, 00:18:16.487 "num_base_bdevs_discovered": 2, 00:18:16.487 "num_base_bdevs_operational": 2, 00:18:16.487 "base_bdevs_list": [ 00:18:16.487 { 00:18:16.487 "name": "BaseBdev1", 00:18:16.487 "uuid": "ab6ed949-ad48-4de4-8eae-575a7ff98b31", 00:18:16.487 "is_configured": true, 00:18:16.487 "data_offset": 256, 00:18:16.487 "data_size": 7936 00:18:16.487 }, 00:18:16.487 { 00:18:16.487 "name": "BaseBdev2", 00:18:16.487 "uuid": "1fe42661-f20a-4b40-8827-46939682739a", 00:18:16.487 "is_configured": true, 00:18:16.487 "data_offset": 256, 00:18:16.487 "data_size": 7936 00:18:16.487 } 00:18:16.487 ] 00:18:16.487 } 00:18:16.487 } 00:18:16.487 }' 00:18:16.487 17:34:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:16.748 17:34:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:16.748 BaseBdev2' 00:18:16.748 17:34:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:16.748 17:34:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:16.748 17:34:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:18:16.748 17:34:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:16.748 17:34:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.748 17:34:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.748 17:34:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:16.748 17:34:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.748 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:16.748 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:16.748 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:16.748 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:16.748 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.748 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.748 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:16.748 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.748 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:16.748 
17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:16.748 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:16.748 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.748 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.748 [2024-12-07 17:34:50.060780] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:17.008 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.008 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:17.008 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:17.008 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:17.008 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:18:17.008 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:17.008 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:17.008 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:17.008 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:17.008 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:17.008 17:34:50 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:17.008 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:17.008 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:17.008 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:17.008 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:17.008 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:17.008 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.008 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:17.008 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.008 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.008 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.008 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:17.008 "name": "Existed_Raid", 00:18:17.008 "uuid": "0a1bdc20-85a1-4747-9b19-d3596396315a", 00:18:17.009 "strip_size_kb": 0, 00:18:17.009 "state": "online", 00:18:17.009 "raid_level": "raid1", 00:18:17.009 "superblock": true, 00:18:17.009 "num_base_bdevs": 2, 00:18:17.009 "num_base_bdevs_discovered": 1, 00:18:17.009 "num_base_bdevs_operational": 1, 00:18:17.009 "base_bdevs_list": [ 00:18:17.009 { 00:18:17.009 "name": null, 00:18:17.009 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:17.009 "is_configured": false, 00:18:17.009 "data_offset": 0, 00:18:17.009 "data_size": 7936 00:18:17.009 }, 00:18:17.009 { 00:18:17.009 "name": "BaseBdev2", 00:18:17.009 "uuid": "1fe42661-f20a-4b40-8827-46939682739a", 00:18:17.009 "is_configured": true, 00:18:17.009 "data_offset": 256, 00:18:17.009 "data_size": 7936 00:18:17.009 } 00:18:17.009 ] 00:18:17.009 }' 00:18:17.009 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:17.009 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.269 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:17.269 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:17.269 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.269 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.269 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.269 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:17.269 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.269 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:17.269 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:17.269 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:17.269 17:34:50 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.269 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.269 [2024-12-07 17:34:50.588742] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:17.269 [2024-12-07 17:34:50.588895] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:17.529 [2024-12-07 17:34:50.694631] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:17.529 [2024-12-07 17:34:50.694701] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:17.529 [2024-12-07 17:34:50.694718] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:17.529 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.529 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:17.529 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:17.529 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.529 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:17.529 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.529 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.529 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.529 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:17.529 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:17.529 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:17.529 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88436 00:18:17.529 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88436 ']' 00:18:17.529 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88436 00:18:17.529 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:18:17.529 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:17.529 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88436 00:18:17.529 killing process with pid 88436 00:18:17.529 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:17.529 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:17.529 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88436' 00:18:17.529 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88436 00:18:17.529 [2024-12-07 17:34:50.778963] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:17.529 17:34:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88436 00:18:17.529 [2024-12-07 17:34:50.796626] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:18.920 
17:34:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:18:18.920 00:18:18.920 real 0m5.207s 00:18:18.920 user 0m7.461s 00:18:18.920 sys 0m0.847s 00:18:18.920 ************************************ 00:18:18.920 END TEST raid_state_function_test_sb_md_interleaved 00:18:18.920 ************************************ 00:18:18.920 17:34:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:18.920 17:34:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:18.920 17:34:52 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:18:18.920 17:34:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:18.920 17:34:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:18.920 17:34:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:18.920 ************************************ 00:18:18.920 START TEST raid_superblock_test_md_interleaved 00:18:18.920 ************************************ 00:18:18.920 17:34:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:18:18.920 17:34:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:18.920 17:34:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:18.920 17:34:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:18.920 17:34:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:18.920 17:34:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:18.920 17:34:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:18:18.920 17:34:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:18.920 17:34:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:18.920 17:34:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:18.920 17:34:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:18.920 17:34:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:18.920 17:34:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:18.920 17:34:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:18.920 17:34:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:18.920 17:34:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:18.920 17:34:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:18.920 17:34:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=88688 00:18:18.920 17:34:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 88688 00:18:18.920 17:34:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88688 ']' 00:18:18.920 17:34:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:18.920 17:34:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:18.920 17:34:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:18.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:18.920 17:34:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:18.920 17:34:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:18.920 [2024-12-07 17:34:52.178106] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:18:18.920 [2024-12-07 17:34:52.178293] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88688 ] 00:18:19.180 [2024-12-07 17:34:52.375660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.180 [2024-12-07 17:34:52.511561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.439 [2024-12-07 17:34:52.741208] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:19.439 [2024-12-07 17:34:52.741252] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:19.698 17:34:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:19.698 17:34:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:18:19.698 17:34:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:19.698 17:34:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:19.698 17:34:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:19.698 17:34:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:18:19.698 17:34:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:19.698 17:34:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:19.698 17:34:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:19.698 17:34:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:19.698 17:34:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:18:19.698 17:34:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.698 17:34:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:19.698 malloc1 00:18:19.698 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.698 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:19.698 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.698 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:19.698 [2024-12-07 17:34:53.056138] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:19.698 [2024-12-07 17:34:53.056214] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:19.698 [2024-12-07 17:34:53.056240] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:19.698 [2024-12-07 17:34:53.056251] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:19.698 
[2024-12-07 17:34:53.058291] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:19.698 [2024-12-07 17:34:53.058328] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:19.698 pt1 00:18:19.698 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.698 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:19.698 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:19.698 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:19.698 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:19.698 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:19.698 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:19.698 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:19.698 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:19.698 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:18:19.698 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.698 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:19.958 malloc2 00:18:19.958 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.958 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:19.958 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.958 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:19.958 [2024-12-07 17:34:53.117990] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:19.958 [2024-12-07 17:34:53.118052] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:19.958 [2024-12-07 17:34:53.118077] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:19.958 [2024-12-07 17:34:53.118087] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:19.958 [2024-12-07 17:34:53.120074] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:19.958 [2024-12-07 17:34:53.120112] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:19.958 pt2 00:18:19.958 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.958 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:19.958 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:19.958 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:19.958 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.958 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:19.958 [2024-12-07 17:34:53.130009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:19.958 [2024-12-07 17:34:53.131959] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:19.958 [2024-12-07 17:34:53.132161] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:19.958 [2024-12-07 17:34:53.132183] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:19.958 [2024-12-07 17:34:53.132262] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:19.958 [2024-12-07 17:34:53.132346] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:19.958 [2024-12-07 17:34:53.132365] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:19.958 [2024-12-07 17:34:53.132438] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:19.958 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.958 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:19.958 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:19.958 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:19.958 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:19.958 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:19.958 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:19.958 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:19.958 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:19.958 
17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:19.958 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:19.958 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.958 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.958 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.958 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:19.958 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.958 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:19.958 "name": "raid_bdev1", 00:18:19.958 "uuid": "dd3fc78d-e17c-4720-ada3-00c7fd757b6e", 00:18:19.958 "strip_size_kb": 0, 00:18:19.958 "state": "online", 00:18:19.958 "raid_level": "raid1", 00:18:19.958 "superblock": true, 00:18:19.958 "num_base_bdevs": 2, 00:18:19.958 "num_base_bdevs_discovered": 2, 00:18:19.958 "num_base_bdevs_operational": 2, 00:18:19.958 "base_bdevs_list": [ 00:18:19.958 { 00:18:19.958 "name": "pt1", 00:18:19.958 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:19.958 "is_configured": true, 00:18:19.958 "data_offset": 256, 00:18:19.958 "data_size": 7936 00:18:19.958 }, 00:18:19.958 { 00:18:19.958 "name": "pt2", 00:18:19.958 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:19.958 "is_configured": true, 00:18:19.958 "data_offset": 256, 00:18:19.958 "data_size": 7936 00:18:19.958 } 00:18:19.958 ] 00:18:19.958 }' 00:18:19.958 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:19.958 17:34:53 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.528 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:20.528 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:20.528 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:20.528 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:20.528 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:20.528 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:20.528 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:20.528 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.528 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:20.528 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.528 [2024-12-07 17:34:53.613374] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:20.528 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.528 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:20.528 "name": "raid_bdev1", 00:18:20.528 "aliases": [ 00:18:20.528 "dd3fc78d-e17c-4720-ada3-00c7fd757b6e" 00:18:20.528 ], 00:18:20.528 "product_name": "Raid Volume", 00:18:20.528 "block_size": 4128, 00:18:20.528 "num_blocks": 7936, 00:18:20.528 "uuid": "dd3fc78d-e17c-4720-ada3-00c7fd757b6e", 00:18:20.528 "md_size": 32, 
00:18:20.528 "md_interleave": true, 00:18:20.528 "dif_type": 0, 00:18:20.528 "assigned_rate_limits": { 00:18:20.528 "rw_ios_per_sec": 0, 00:18:20.528 "rw_mbytes_per_sec": 0, 00:18:20.528 "r_mbytes_per_sec": 0, 00:18:20.528 "w_mbytes_per_sec": 0 00:18:20.528 }, 00:18:20.528 "claimed": false, 00:18:20.528 "zoned": false, 00:18:20.528 "supported_io_types": { 00:18:20.528 "read": true, 00:18:20.528 "write": true, 00:18:20.528 "unmap": false, 00:18:20.528 "flush": false, 00:18:20.528 "reset": true, 00:18:20.528 "nvme_admin": false, 00:18:20.528 "nvme_io": false, 00:18:20.528 "nvme_io_md": false, 00:18:20.528 "write_zeroes": true, 00:18:20.528 "zcopy": false, 00:18:20.529 "get_zone_info": false, 00:18:20.529 "zone_management": false, 00:18:20.529 "zone_append": false, 00:18:20.529 "compare": false, 00:18:20.529 "compare_and_write": false, 00:18:20.529 "abort": false, 00:18:20.529 "seek_hole": false, 00:18:20.529 "seek_data": false, 00:18:20.529 "copy": false, 00:18:20.529 "nvme_iov_md": false 00:18:20.529 }, 00:18:20.529 "memory_domains": [ 00:18:20.529 { 00:18:20.529 "dma_device_id": "system", 00:18:20.529 "dma_device_type": 1 00:18:20.529 }, 00:18:20.529 { 00:18:20.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:20.529 "dma_device_type": 2 00:18:20.529 }, 00:18:20.529 { 00:18:20.529 "dma_device_id": "system", 00:18:20.529 "dma_device_type": 1 00:18:20.529 }, 00:18:20.529 { 00:18:20.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:20.529 "dma_device_type": 2 00:18:20.529 } 00:18:20.529 ], 00:18:20.529 "driver_specific": { 00:18:20.529 "raid": { 00:18:20.529 "uuid": "dd3fc78d-e17c-4720-ada3-00c7fd757b6e", 00:18:20.529 "strip_size_kb": 0, 00:18:20.529 "state": "online", 00:18:20.529 "raid_level": "raid1", 00:18:20.529 "superblock": true, 00:18:20.529 "num_base_bdevs": 2, 00:18:20.529 "num_base_bdevs_discovered": 2, 00:18:20.529 "num_base_bdevs_operational": 2, 00:18:20.529 "base_bdevs_list": [ 00:18:20.529 { 00:18:20.529 "name": "pt1", 00:18:20.529 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:18:20.529 "is_configured": true, 00:18:20.529 "data_offset": 256, 00:18:20.529 "data_size": 7936 00:18:20.529 }, 00:18:20.529 { 00:18:20.529 "name": "pt2", 00:18:20.529 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:20.529 "is_configured": true, 00:18:20.529 "data_offset": 256, 00:18:20.529 "data_size": 7936 00:18:20.529 } 00:18:20.529 ] 00:18:20.529 } 00:18:20.529 } 00:18:20.529 }' 00:18:20.529 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:20.529 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:20.529 pt2' 00:18:20.529 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:20.529 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:20.529 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:20.529 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:20.529 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:20.529 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.529 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.529 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.529 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:20.529 17:34:53 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:20.529 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:20.529 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:20.529 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:20.529 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.529 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.529 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.529 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:20.529 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:20.529 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:20.529 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.529 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.529 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:20.529 [2024-12-07 17:34:53.821042] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:20.529 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.529 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=dd3fc78d-e17c-4720-ada3-00c7fd757b6e 00:18:20.529 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z dd3fc78d-e17c-4720-ada3-00c7fd757b6e ']' 00:18:20.529 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:20.529 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.529 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.529 [2024-12-07 17:34:53.864662] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:20.529 [2024-12-07 17:34:53.864706] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:20.529 [2024-12-07 17:34:53.864804] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:20.529 [2024-12-07 17:34:53.864865] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:20.529 [2024-12-07 17:34:53.864887] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:20.529 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.529 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.529 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.529 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.529 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:20.529 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.791 17:34:53 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:20.791 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:20.791 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:20.791 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:20.791 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.791 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.791 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.791 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:20.791 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:20.791 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.791 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.791 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.791 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:20.791 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.791 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.791 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:20.791 17:34:53 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.791 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:20.791 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:20.791 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:18:20.791 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:20.791 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:20.791 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:20.791 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:20.791 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:20.791 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:20.791 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.791 17:34:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.791 [2024-12-07 17:34:53.996465] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:20.791 [2024-12-07 17:34:53.998667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:20.791 [2024-12-07 17:34:53.998765] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:18:20.791 [2024-12-07 17:34:53.998828] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:20.791 [2024-12-07 17:34:53.998852] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:20.791 [2024-12-07 17:34:53.998866] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:20.791 request: 00:18:20.791 { 00:18:20.791 "name": "raid_bdev1", 00:18:20.791 "raid_level": "raid1", 00:18:20.791 "base_bdevs": [ 00:18:20.791 "malloc1", 00:18:20.791 "malloc2" 00:18:20.791 ], 00:18:20.791 "superblock": false, 00:18:20.791 "method": "bdev_raid_create", 00:18:20.791 "req_id": 1 00:18:20.791 } 00:18:20.791 Got JSON-RPC error response 00:18:20.791 response: 00:18:20.791 { 00:18:20.791 "code": -17, 00:18:20.791 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:20.791 } 00:18:20.791 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:20.791 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:18:20.791 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:20.791 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:20.791 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:20.791 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.791 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.791 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.791 17:34:54 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:20.791 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.791 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:20.791 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:20.791 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:20.791 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.791 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.791 [2024-12-07 17:34:54.060329] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:20.791 [2024-12-07 17:34:54.060389] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:20.791 [2024-12-07 17:34:54.060408] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:20.791 [2024-12-07 17:34:54.060422] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:20.791 [2024-12-07 17:34:54.062609] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:20.791 [2024-12-07 17:34:54.062651] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:20.791 [2024-12-07 17:34:54.062709] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:20.791 [2024-12-07 17:34:54.062771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:20.791 pt1 00:18:20.791 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.791 17:34:54 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:20.791 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:20.791 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:20.791 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:20.791 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:20.791 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:20.791 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:20.791 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:20.791 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:20.791 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:20.791 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.791 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.791 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.791 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.791 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.791 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.791 
"name": "raid_bdev1", 00:18:20.791 "uuid": "dd3fc78d-e17c-4720-ada3-00c7fd757b6e", 00:18:20.791 "strip_size_kb": 0, 00:18:20.791 "state": "configuring", 00:18:20.791 "raid_level": "raid1", 00:18:20.791 "superblock": true, 00:18:20.791 "num_base_bdevs": 2, 00:18:20.791 "num_base_bdevs_discovered": 1, 00:18:20.791 "num_base_bdevs_operational": 2, 00:18:20.791 "base_bdevs_list": [ 00:18:20.791 { 00:18:20.791 "name": "pt1", 00:18:20.791 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:20.791 "is_configured": true, 00:18:20.791 "data_offset": 256, 00:18:20.791 "data_size": 7936 00:18:20.791 }, 00:18:20.791 { 00:18:20.791 "name": null, 00:18:20.791 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:20.792 "is_configured": false, 00:18:20.792 "data_offset": 256, 00:18:20.792 "data_size": 7936 00:18:20.792 } 00:18:20.792 ] 00:18:20.792 }' 00:18:20.792 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.792 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.362 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:21.362 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:21.362 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:21.362 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:21.362 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.362 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.362 [2024-12-07 17:34:54.531666] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:21.362 [2024-12-07 17:34:54.531763] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:21.362 [2024-12-07 17:34:54.531791] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:21.362 [2024-12-07 17:34:54.531805] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:21.362 [2024-12-07 17:34:54.532041] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:21.362 [2024-12-07 17:34:54.532070] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:21.362 [2024-12-07 17:34:54.532137] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:21.362 [2024-12-07 17:34:54.532168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:21.362 [2024-12-07 17:34:54.532277] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:21.362 [2024-12-07 17:34:54.532298] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:21.362 [2024-12-07 17:34:54.532394] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:21.362 [2024-12-07 17:34:54.532476] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:21.362 [2024-12-07 17:34:54.532490] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:21.362 [2024-12-07 17:34:54.532569] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:21.362 pt2 00:18:21.362 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.362 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:21.362 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:21.363 17:34:54 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:21.363 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:21.363 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:21.363 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:21.363 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:21.363 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:21.363 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:21.363 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:21.363 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:21.363 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:21.363 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.363 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.363 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.363 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.363 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.363 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:21.363 "name": 
"raid_bdev1", 00:18:21.363 "uuid": "dd3fc78d-e17c-4720-ada3-00c7fd757b6e", 00:18:21.363 "strip_size_kb": 0, 00:18:21.363 "state": "online", 00:18:21.363 "raid_level": "raid1", 00:18:21.363 "superblock": true, 00:18:21.363 "num_base_bdevs": 2, 00:18:21.363 "num_base_bdevs_discovered": 2, 00:18:21.363 "num_base_bdevs_operational": 2, 00:18:21.363 "base_bdevs_list": [ 00:18:21.363 { 00:18:21.363 "name": "pt1", 00:18:21.363 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:21.363 "is_configured": true, 00:18:21.363 "data_offset": 256, 00:18:21.363 "data_size": 7936 00:18:21.363 }, 00:18:21.363 { 00:18:21.363 "name": "pt2", 00:18:21.363 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:21.363 "is_configured": true, 00:18:21.363 "data_offset": 256, 00:18:21.363 "data_size": 7936 00:18:21.363 } 00:18:21.363 ] 00:18:21.363 }' 00:18:21.363 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:21.363 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.623 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:21.623 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:21.623 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:21.623 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:21.623 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:21.623 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:21.623 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:21.623 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:21.623 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.623 17:34:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.623 [2024-12-07 17:34:54.983154] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:21.894 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.894 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:21.894 "name": "raid_bdev1", 00:18:21.894 "aliases": [ 00:18:21.894 "dd3fc78d-e17c-4720-ada3-00c7fd757b6e" 00:18:21.894 ], 00:18:21.894 "product_name": "Raid Volume", 00:18:21.894 "block_size": 4128, 00:18:21.895 "num_blocks": 7936, 00:18:21.895 "uuid": "dd3fc78d-e17c-4720-ada3-00c7fd757b6e", 00:18:21.895 "md_size": 32, 00:18:21.895 "md_interleave": true, 00:18:21.895 "dif_type": 0, 00:18:21.895 "assigned_rate_limits": { 00:18:21.895 "rw_ios_per_sec": 0, 00:18:21.895 "rw_mbytes_per_sec": 0, 00:18:21.895 "r_mbytes_per_sec": 0, 00:18:21.895 "w_mbytes_per_sec": 0 00:18:21.895 }, 00:18:21.895 "claimed": false, 00:18:21.895 "zoned": false, 00:18:21.895 "supported_io_types": { 00:18:21.895 "read": true, 00:18:21.895 "write": true, 00:18:21.895 "unmap": false, 00:18:21.895 "flush": false, 00:18:21.895 "reset": true, 00:18:21.895 "nvme_admin": false, 00:18:21.895 "nvme_io": false, 00:18:21.895 "nvme_io_md": false, 00:18:21.895 "write_zeroes": true, 00:18:21.895 "zcopy": false, 00:18:21.895 "get_zone_info": false, 00:18:21.895 "zone_management": false, 00:18:21.895 "zone_append": false, 00:18:21.895 "compare": false, 00:18:21.895 "compare_and_write": false, 00:18:21.895 "abort": false, 00:18:21.895 "seek_hole": false, 00:18:21.895 "seek_data": false, 00:18:21.895 "copy": false, 00:18:21.895 "nvme_iov_md": false 00:18:21.895 }, 
00:18:21.895 "memory_domains": [ 00:18:21.895 { 00:18:21.895 "dma_device_id": "system", 00:18:21.895 "dma_device_type": 1 00:18:21.895 }, 00:18:21.895 { 00:18:21.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:21.895 "dma_device_type": 2 00:18:21.895 }, 00:18:21.895 { 00:18:21.895 "dma_device_id": "system", 00:18:21.895 "dma_device_type": 1 00:18:21.895 }, 00:18:21.895 { 00:18:21.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:21.895 "dma_device_type": 2 00:18:21.895 } 00:18:21.895 ], 00:18:21.895 "driver_specific": { 00:18:21.895 "raid": { 00:18:21.895 "uuid": "dd3fc78d-e17c-4720-ada3-00c7fd757b6e", 00:18:21.895 "strip_size_kb": 0, 00:18:21.895 "state": "online", 00:18:21.895 "raid_level": "raid1", 00:18:21.895 "superblock": true, 00:18:21.895 "num_base_bdevs": 2, 00:18:21.895 "num_base_bdevs_discovered": 2, 00:18:21.895 "num_base_bdevs_operational": 2, 00:18:21.895 "base_bdevs_list": [ 00:18:21.895 { 00:18:21.895 "name": "pt1", 00:18:21.895 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:21.895 "is_configured": true, 00:18:21.895 "data_offset": 256, 00:18:21.895 "data_size": 7936 00:18:21.895 }, 00:18:21.895 { 00:18:21.895 "name": "pt2", 00:18:21.895 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:21.895 "is_configured": true, 00:18:21.895 "data_offset": 256, 00:18:21.895 "data_size": 7936 00:18:21.895 } 00:18:21.895 ] 00:18:21.895 } 00:18:21.895 } 00:18:21.895 }' 00:18:21.895 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:21.895 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:21.895 pt2' 00:18:21.895 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:21.895 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # 
cmp_raid_bdev='4128 32 true 0' 00:18:21.895 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:21.895 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:21.895 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.895 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.895 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:21.895 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.895 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:21.895 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:21.895 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:21.895 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:21.895 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.895 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.895 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:21.895 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.895 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 
true 0' 00:18:21.895 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:21.895 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:21.895 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:21.895 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.896 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.896 [2024-12-07 17:34:55.226723] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:21.896 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.896 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' dd3fc78d-e17c-4720-ada3-00c7fd757b6e '!=' dd3fc78d-e17c-4720-ada3-00c7fd757b6e ']' 00:18:21.896 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:21.896 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:21.896 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:18:21.896 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:21.896 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.896 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.156 [2024-12-07 17:34:55.270446] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:22.156 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:18:22.156 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:22.156 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:22.156 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:22.156 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:22.156 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:22.156 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:22.156 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:22.156 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:22.156 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:22.156 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:22.156 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.156 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.156 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.156 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.156 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.156 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:18:22.156 "name": "raid_bdev1", 00:18:22.156 "uuid": "dd3fc78d-e17c-4720-ada3-00c7fd757b6e", 00:18:22.156 "strip_size_kb": 0, 00:18:22.156 "state": "online", 00:18:22.156 "raid_level": "raid1", 00:18:22.156 "superblock": true, 00:18:22.156 "num_base_bdevs": 2, 00:18:22.156 "num_base_bdevs_discovered": 1, 00:18:22.156 "num_base_bdevs_operational": 1, 00:18:22.156 "base_bdevs_list": [ 00:18:22.156 { 00:18:22.156 "name": null, 00:18:22.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.156 "is_configured": false, 00:18:22.156 "data_offset": 0, 00:18:22.156 "data_size": 7936 00:18:22.156 }, 00:18:22.156 { 00:18:22.156 "name": "pt2", 00:18:22.156 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:22.156 "is_configured": true, 00:18:22.156 "data_offset": 256, 00:18:22.156 "data_size": 7936 00:18:22.156 } 00:18:22.156 ] 00:18:22.156 }' 00:18:22.156 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:22.156 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.415 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:22.415 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.415 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.415 [2024-12-07 17:34:55.693665] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:22.415 [2024-12-07 17:34:55.693698] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:22.415 [2024-12-07 17:34:55.693762] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:22.415 [2024-12-07 17:34:55.693811] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:22.415 [2024-12-07 
17:34:55.693824] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:22.415 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.415 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.415 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.415 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:22.415 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.415 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.415 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:22.415 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:22.415 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:22.415 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:22.415 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:22.415 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.415 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.415 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.415 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:22.415 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < 
num_base_bdevs )) 00:18:22.415 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:22.415 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:22.415 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:18:22.415 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:22.415 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.415 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.415 [2024-12-07 17:34:55.765563] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:22.415 [2024-12-07 17:34:55.765622] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:22.415 [2024-12-07 17:34:55.765641] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:22.415 [2024-12-07 17:34:55.765655] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:22.415 [2024-12-07 17:34:55.767765] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:22.415 [2024-12-07 17:34:55.767811] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:22.415 [2024-12-07 17:34:55.767867] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:22.415 [2024-12-07 17:34:55.767923] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:22.415 [2024-12-07 17:34:55.768006] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:22.415 [2024-12-07 17:34:55.768021] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 
00:18:22.415 [2024-12-07 17:34:55.768120] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:22.415 [2024-12-07 17:34:55.768200] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:22.415 [2024-12-07 17:34:55.768209] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:22.415 [2024-12-07 17:34:55.768269] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:22.415 pt2 00:18:22.415 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.415 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:22.415 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:22.415 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:22.415 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:22.415 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:22.415 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:22.415 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:22.415 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:22.415 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:22.415 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:22.415 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.415 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.415 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.415 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.673 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.673 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:22.673 "name": "raid_bdev1", 00:18:22.673 "uuid": "dd3fc78d-e17c-4720-ada3-00c7fd757b6e", 00:18:22.673 "strip_size_kb": 0, 00:18:22.673 "state": "online", 00:18:22.673 "raid_level": "raid1", 00:18:22.673 "superblock": true, 00:18:22.673 "num_base_bdevs": 2, 00:18:22.673 "num_base_bdevs_discovered": 1, 00:18:22.673 "num_base_bdevs_operational": 1, 00:18:22.673 "base_bdevs_list": [ 00:18:22.673 { 00:18:22.673 "name": null, 00:18:22.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.673 "is_configured": false, 00:18:22.673 "data_offset": 256, 00:18:22.673 "data_size": 7936 00:18:22.673 }, 00:18:22.673 { 00:18:22.673 "name": "pt2", 00:18:22.673 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:22.673 "is_configured": true, 00:18:22.673 "data_offset": 256, 00:18:22.673 "data_size": 7936 00:18:22.673 } 00:18:22.673 ] 00:18:22.673 }' 00:18:22.673 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:22.673 17:34:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.932 17:34:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:22.932 17:34:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:22.932 17:34:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.932 [2024-12-07 17:34:56.248707] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:22.932 [2024-12-07 17:34:56.248741] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:22.932 [2024-12-07 17:34:56.248800] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:22.932 [2024-12-07 17:34:56.248848] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:22.932 [2024-12-07 17:34:56.248858] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:22.932 17:34:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.932 17:34:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.932 17:34:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.932 17:34:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:22.932 17:34:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.932 17:34:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.932 17:34:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:22.932 17:34:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:22.932 17:34:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:22.932 17:34:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:18:22.932 17:34:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.932 17:34:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.932 [2024-12-07 17:34:56.308635] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:22.932 [2024-12-07 17:34:56.308691] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:22.932 [2024-12-07 17:34:56.308712] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:22.932 [2024-12-07 17:34:56.308723] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:22.932 [2024-12-07 17:34:56.310800] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:22.932 [2024-12-07 17:34:56.310838] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:22.932 [2024-12-07 17:34:56.310891] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:22.932 [2024-12-07 17:34:56.310963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:22.932 [2024-12-07 17:34:56.311067] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:22.932 [2024-12-07 17:34:56.311085] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:22.932 [2024-12-07 17:34:56.311102] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:22.932 [2024-12-07 17:34:56.311159] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:22.932 [2024-12-07 17:34:56.311239] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:22.932 [2024-12-07 17:34:56.311260] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:22.932 [2024-12-07 17:34:56.311334] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:22.932 [2024-12-07 17:34:56.311403] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:22.933 [2024-12-07 17:34:56.311418] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:22.933 [2024-12-07 17:34:56.311487] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:23.192 pt1 00:18:23.192 17:34:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.192 17:34:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:18:23.192 17:34:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:23.192 17:34:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:23.192 17:34:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:23.192 17:34:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:23.192 17:34:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:23.192 17:34:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:23.192 17:34:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:23.192 17:34:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:23.192 17:34:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:18:23.192 17:34:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:23.192 17:34:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.192 17:34:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.192 17:34:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.192 17:34:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.192 17:34:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.192 17:34:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:23.192 "name": "raid_bdev1", 00:18:23.192 "uuid": "dd3fc78d-e17c-4720-ada3-00c7fd757b6e", 00:18:23.192 "strip_size_kb": 0, 00:18:23.192 "state": "online", 00:18:23.192 "raid_level": "raid1", 00:18:23.192 "superblock": true, 00:18:23.192 "num_base_bdevs": 2, 00:18:23.192 "num_base_bdevs_discovered": 1, 00:18:23.192 "num_base_bdevs_operational": 1, 00:18:23.192 "base_bdevs_list": [ 00:18:23.192 { 00:18:23.192 "name": null, 00:18:23.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.192 "is_configured": false, 00:18:23.192 "data_offset": 256, 00:18:23.192 "data_size": 7936 00:18:23.192 }, 00:18:23.192 { 00:18:23.192 "name": "pt2", 00:18:23.192 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:23.192 "is_configured": true, 00:18:23.193 "data_offset": 256, 00:18:23.193 "data_size": 7936 00:18:23.193 } 00:18:23.193 ] 00:18:23.193 }' 00:18:23.193 17:34:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:23.193 17:34:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.452 17:34:56 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:23.452 17:34:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.452 17:34:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.452 17:34:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:23.452 17:34:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.452 17:34:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:23.452 17:34:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:23.452 17:34:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:23.452 17:34:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.452 17:34:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.452 [2024-12-07 17:34:56.803980] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:23.452 17:34:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.712 17:34:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' dd3fc78d-e17c-4720-ada3-00c7fd757b6e '!=' dd3fc78d-e17c-4720-ada3-00c7fd757b6e ']' 00:18:23.712 17:34:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 88688 00:18:23.712 17:34:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88688 ']' 00:18:23.712 17:34:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88688 00:18:23.712 17:34:56 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:18:23.712 17:34:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:23.712 17:34:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88688 00:18:23.712 killing process with pid 88688 00:18:23.712 17:34:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:23.712 17:34:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:23.712 17:34:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88688' 00:18:23.712 17:34:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 88688 00:18:23.712 [2024-12-07 17:34:56.887151] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:23.712 [2024-12-07 17:34:56.887217] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:23.712 [2024-12-07 17:34:56.887255] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:23.712 [2024-12-07 17:34:56.887270] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:23.712 17:34:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 88688 00:18:23.972 [2024-12-07 17:34:57.100301] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:25.353 ************************************ 00:18:25.354 17:34:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:18:25.354 00:18:25.354 real 0m6.223s 00:18:25.354 user 0m9.231s 00:18:25.354 sys 0m1.237s 00:18:25.354 17:34:58 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:18:25.354 17:34:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.354 END TEST raid_superblock_test_md_interleaved 00:18:25.354 ************************************ 00:18:25.354 17:34:58 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:18:25.354 17:34:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:25.354 17:34:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:25.354 17:34:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:25.354 ************************************ 00:18:25.354 START TEST raid_rebuild_test_sb_md_interleaved 00:18:25.354 ************************************ 00:18:25.354 17:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:18:25.354 17:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:25.354 17:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:25.354 17:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:25.354 17:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:25.354 17:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:18:25.354 17:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:25.354 17:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:25.354 17:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:25.354 17:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:25.354 17:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:25.354 17:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:25.354 17:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:25.354 17:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:25.354 17:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:25.354 17:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:25.354 17:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:25.354 17:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:25.354 17:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:25.354 17:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:25.354 17:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:25.354 17:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:25.354 17:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:25.354 17:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:25.354 17:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:25.354 17:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=89012 00:18:25.354 17:34:58 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89012 00:18:25.354 17:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:25.354 17:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89012 ']' 00:18:25.354 17:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:25.354 17:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:25.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:25.354 17:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:25.354 17:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:25.354 17:34:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.354 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:25.354 Zero copy mechanism will not be used. 00:18:25.354 [2024-12-07 17:34:58.495726] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:18:25.354 [2024-12-07 17:34:58.495840] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89012 ] 00:18:25.354 [2024-12-07 17:34:58.669601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.614 [2024-12-07 17:34:58.808767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.874 [2024-12-07 17:34:59.035792] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:25.874 [2024-12-07 17:34:59.035869] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:26.134 17:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:26.134 17:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:18:26.134 17:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:26.134 17:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:18:26.134 17:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.134 17:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.135 BaseBdev1_malloc 00:18:26.135 17:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.135 17:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:26.135 17:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.135 17:34:59 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.135 [2024-12-07 17:34:59.373543] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:26.135 [2024-12-07 17:34:59.373625] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:26.135 [2024-12-07 17:34:59.373651] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:26.135 [2024-12-07 17:34:59.373666] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:26.135 [2024-12-07 17:34:59.375797] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:26.135 [2024-12-07 17:34:59.375843] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:26.135 BaseBdev1 00:18:26.135 17:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.135 17:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:26.135 17:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:18:26.135 17:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.135 17:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.135 BaseBdev2_malloc 00:18:26.135 17:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.135 17:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:26.135 17:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.135 17:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:18:26.135 [2024-12-07 17:34:59.435539] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:26.135 [2024-12-07 17:34:59.435615] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:26.135 [2024-12-07 17:34:59.435641] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:26.135 [2024-12-07 17:34:59.435660] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:26.135 [2024-12-07 17:34:59.437805] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:26.135 [2024-12-07 17:34:59.437847] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:26.135 BaseBdev2 00:18:26.135 17:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.135 17:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:18:26.135 17:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.135 17:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.395 spare_malloc 00:18:26.395 17:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.395 17:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:26.395 17:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.395 17:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.395 spare_delay 00:18:26.395 17:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.395 17:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:26.395 17:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.396 17:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.396 [2024-12-07 17:34:59.541669] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:26.396 [2024-12-07 17:34:59.541744] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:26.396 [2024-12-07 17:34:59.541767] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:26.396 [2024-12-07 17:34:59.541781] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:26.396 [2024-12-07 17:34:59.543840] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:26.396 [2024-12-07 17:34:59.543886] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:26.396 spare 00:18:26.396 17:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.396 17:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:26.396 17:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.396 17:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.396 [2024-12-07 17:34:59.553700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:26.396 [2024-12-07 17:34:59.555835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:26.396 [2024-12-07 
17:34:59.556075] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:26.396 [2024-12-07 17:34:59.556094] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:26.396 [2024-12-07 17:34:59.556179] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:26.396 [2024-12-07 17:34:59.556270] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:26.396 [2024-12-07 17:34:59.556280] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:26.396 [2024-12-07 17:34:59.556357] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:26.396 17:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.396 17:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:26.396 17:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:26.396 17:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:26.396 17:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:26.396 17:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:26.396 17:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:26.396 17:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:26.396 17:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:26.396 17:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:18:26.396 17:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:26.396 17:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.396 17:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.396 17:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.396 17:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.396 17:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.396 17:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:26.396 "name": "raid_bdev1", 00:18:26.396 "uuid": "b57bad35-66c9-4fb1-a9a3-6a39d7838ced", 00:18:26.396 "strip_size_kb": 0, 00:18:26.396 "state": "online", 00:18:26.396 "raid_level": "raid1", 00:18:26.396 "superblock": true, 00:18:26.396 "num_base_bdevs": 2, 00:18:26.396 "num_base_bdevs_discovered": 2, 00:18:26.396 "num_base_bdevs_operational": 2, 00:18:26.396 "base_bdevs_list": [ 00:18:26.396 { 00:18:26.396 "name": "BaseBdev1", 00:18:26.396 "uuid": "204f454a-eb56-5c58-9ca2-9726b86d8d84", 00:18:26.396 "is_configured": true, 00:18:26.396 "data_offset": 256, 00:18:26.396 "data_size": 7936 00:18:26.396 }, 00:18:26.396 { 00:18:26.396 "name": "BaseBdev2", 00:18:26.396 "uuid": "27d57c02-9b46-550b-8da2-3cbb22180963", 00:18:26.396 "is_configured": true, 00:18:26.396 "data_offset": 256, 00:18:26.396 "data_size": 7936 00:18:26.396 } 00:18:26.396 ] 00:18:26.396 }' 00:18:26.396 17:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:26.396 17:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.656 17:34:59 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:26.656 17:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.656 17:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.656 17:34:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:26.656 [2024-12-07 17:34:59.989324] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:26.656 17:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.656 17:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:26.656 17:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:26.656 17:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.656 17:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.656 17:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.917 17:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.917 17:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:26.917 17:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:26.917 17:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:18:26.917 17:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:26.917 17:35:00 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.917 17:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.917 [2024-12-07 17:35:00.060814] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:26.917 17:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.917 17:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:26.917 17:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:26.917 17:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:26.917 17:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:26.917 17:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:26.917 17:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:26.917 17:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:26.917 17:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:26.917 17:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:26.917 17:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:26.917 17:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.917 17:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.917 17:35:00 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.917 17:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.917 17:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.917 17:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:26.917 "name": "raid_bdev1", 00:18:26.917 "uuid": "b57bad35-66c9-4fb1-a9a3-6a39d7838ced", 00:18:26.917 "strip_size_kb": 0, 00:18:26.917 "state": "online", 00:18:26.917 "raid_level": "raid1", 00:18:26.917 "superblock": true, 00:18:26.917 "num_base_bdevs": 2, 00:18:26.917 "num_base_bdevs_discovered": 1, 00:18:26.917 "num_base_bdevs_operational": 1, 00:18:26.917 "base_bdevs_list": [ 00:18:26.917 { 00:18:26.917 "name": null, 00:18:26.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.917 "is_configured": false, 00:18:26.917 "data_offset": 0, 00:18:26.917 "data_size": 7936 00:18:26.917 }, 00:18:26.917 { 00:18:26.917 "name": "BaseBdev2", 00:18:26.917 "uuid": "27d57c02-9b46-550b-8da2-3cbb22180963", 00:18:26.917 "is_configured": true, 00:18:26.917 "data_offset": 256, 00:18:26.917 "data_size": 7936 00:18:26.917 } 00:18:26.917 ] 00:18:26.917 }' 00:18:26.917 17:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:26.917 17:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.178 17:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:27.178 17:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.178 17:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.178 [2024-12-07 17:35:00.512133] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:27.178 [2024-12-07 17:35:00.532865] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:27.178 17:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.178 17:35:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:27.178 [2024-12-07 17:35:00.535049] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:28.561 17:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:28.561 17:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:28.561 17:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:28.561 17:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:28.561 17:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:28.561 17:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.561 17:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.561 17:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.561 17:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.561 17:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.561 17:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:28.561 "name": "raid_bdev1", 00:18:28.561 
"uuid": "b57bad35-66c9-4fb1-a9a3-6a39d7838ced", 00:18:28.561 "strip_size_kb": 0, 00:18:28.561 "state": "online", 00:18:28.561 "raid_level": "raid1", 00:18:28.561 "superblock": true, 00:18:28.561 "num_base_bdevs": 2, 00:18:28.561 "num_base_bdevs_discovered": 2, 00:18:28.561 "num_base_bdevs_operational": 2, 00:18:28.561 "process": { 00:18:28.561 "type": "rebuild", 00:18:28.561 "target": "spare", 00:18:28.561 "progress": { 00:18:28.561 "blocks": 2560, 00:18:28.561 "percent": 32 00:18:28.561 } 00:18:28.561 }, 00:18:28.561 "base_bdevs_list": [ 00:18:28.561 { 00:18:28.561 "name": "spare", 00:18:28.561 "uuid": "603c449f-857a-50ea-93bb-10deae71dbe7", 00:18:28.561 "is_configured": true, 00:18:28.561 "data_offset": 256, 00:18:28.561 "data_size": 7936 00:18:28.561 }, 00:18:28.561 { 00:18:28.561 "name": "BaseBdev2", 00:18:28.561 "uuid": "27d57c02-9b46-550b-8da2-3cbb22180963", 00:18:28.561 "is_configured": true, 00:18:28.561 "data_offset": 256, 00:18:28.561 "data_size": 7936 00:18:28.561 } 00:18:28.561 ] 00:18:28.561 }' 00:18:28.561 17:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:28.561 17:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:28.561 17:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:28.561 17:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:28.561 17:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:28.561 17:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.561 17:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.561 [2024-12-07 17:35:01.691115] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:18:28.561 [2024-12-07 17:35:01.744780] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:28.561 [2024-12-07 17:35:01.744854] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:28.561 [2024-12-07 17:35:01.744873] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:28.561 [2024-12-07 17:35:01.744890] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:28.561 17:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.561 17:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:28.561 17:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:28.561 17:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:28.561 17:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:28.561 17:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:28.561 17:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:28.561 17:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:28.561 17:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:28.561 17:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:28.561 17:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:28.561 17:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.561 17:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.561 17:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.561 17:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.561 17:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.561 17:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:28.561 "name": "raid_bdev1", 00:18:28.561 "uuid": "b57bad35-66c9-4fb1-a9a3-6a39d7838ced", 00:18:28.561 "strip_size_kb": 0, 00:18:28.561 "state": "online", 00:18:28.561 "raid_level": "raid1", 00:18:28.561 "superblock": true, 00:18:28.561 "num_base_bdevs": 2, 00:18:28.561 "num_base_bdevs_discovered": 1, 00:18:28.561 "num_base_bdevs_operational": 1, 00:18:28.561 "base_bdevs_list": [ 00:18:28.561 { 00:18:28.561 "name": null, 00:18:28.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.561 "is_configured": false, 00:18:28.561 "data_offset": 0, 00:18:28.561 "data_size": 7936 00:18:28.561 }, 00:18:28.561 { 00:18:28.561 "name": "BaseBdev2", 00:18:28.561 "uuid": "27d57c02-9b46-550b-8da2-3cbb22180963", 00:18:28.561 "is_configured": true, 00:18:28.561 "data_offset": 256, 00:18:28.561 "data_size": 7936 00:18:28.561 } 00:18:28.561 ] 00:18:28.561 }' 00:18:28.561 17:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:28.561 17:35:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:29.131 17:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:29.131 17:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:18:29.131 17:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:29.131 17:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:29.131 17:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:29.131 17:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.131 17:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.131 17:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:29.131 17:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.131 17:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.131 17:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:29.131 "name": "raid_bdev1", 00:18:29.131 "uuid": "b57bad35-66c9-4fb1-a9a3-6a39d7838ced", 00:18:29.131 "strip_size_kb": 0, 00:18:29.131 "state": "online", 00:18:29.131 "raid_level": "raid1", 00:18:29.131 "superblock": true, 00:18:29.131 "num_base_bdevs": 2, 00:18:29.131 "num_base_bdevs_discovered": 1, 00:18:29.131 "num_base_bdevs_operational": 1, 00:18:29.131 "base_bdevs_list": [ 00:18:29.131 { 00:18:29.131 "name": null, 00:18:29.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.131 "is_configured": false, 00:18:29.131 "data_offset": 0, 00:18:29.131 "data_size": 7936 00:18:29.131 }, 00:18:29.131 { 00:18:29.131 "name": "BaseBdev2", 00:18:29.131 "uuid": "27d57c02-9b46-550b-8da2-3cbb22180963", 00:18:29.131 "is_configured": true, 00:18:29.131 "data_offset": 256, 00:18:29.131 "data_size": 7936 00:18:29.131 } 00:18:29.131 ] 00:18:29.131 }' 
00:18:29.131 17:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:29.131 17:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:29.131 17:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:29.131 17:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:29.131 17:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:29.131 17:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.131 17:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:29.131 [2024-12-07 17:35:02.383063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:29.131 [2024-12-07 17:35:02.401605] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:29.131 17:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.132 17:35:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:29.132 [2024-12-07 17:35:02.403796] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:30.072 17:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:30.072 17:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:30.073 17:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:30.073 17:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:18:30.073 17:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:30.073 17:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.073 17:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.073 17:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.073 17:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:30.073 17:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.355 17:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:30.355 "name": "raid_bdev1", 00:18:30.355 "uuid": "b57bad35-66c9-4fb1-a9a3-6a39d7838ced", 00:18:30.355 "strip_size_kb": 0, 00:18:30.355 "state": "online", 00:18:30.355 "raid_level": "raid1", 00:18:30.355 "superblock": true, 00:18:30.355 "num_base_bdevs": 2, 00:18:30.355 "num_base_bdevs_discovered": 2, 00:18:30.355 "num_base_bdevs_operational": 2, 00:18:30.355 "process": { 00:18:30.355 "type": "rebuild", 00:18:30.355 "target": "spare", 00:18:30.355 "progress": { 00:18:30.355 "blocks": 2560, 00:18:30.355 "percent": 32 00:18:30.355 } 00:18:30.355 }, 00:18:30.355 "base_bdevs_list": [ 00:18:30.355 { 00:18:30.355 "name": "spare", 00:18:30.355 "uuid": "603c449f-857a-50ea-93bb-10deae71dbe7", 00:18:30.355 "is_configured": true, 00:18:30.355 "data_offset": 256, 00:18:30.355 "data_size": 7936 00:18:30.355 }, 00:18:30.355 { 00:18:30.355 "name": "BaseBdev2", 00:18:30.355 "uuid": "27d57c02-9b46-550b-8da2-3cbb22180963", 00:18:30.355 "is_configured": true, 00:18:30.355 "data_offset": 256, 00:18:30.355 "data_size": 7936 00:18:30.355 } 00:18:30.355 ] 00:18:30.355 }' 00:18:30.355 17:35:03 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:30.355 17:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:30.355 17:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:30.355 17:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:30.355 17:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:30.355 17:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:30.355 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:30.355 17:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:30.355 17:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:30.355 17:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:30.355 17:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=734 00:18:30.355 17:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:30.355 17:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:30.355 17:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:30.355 17:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:30.355 17:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:30.355 17:35:03 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:30.355 17:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.355 17:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.355 17:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.356 17:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:30.356 17:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.356 17:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:30.356 "name": "raid_bdev1", 00:18:30.356 "uuid": "b57bad35-66c9-4fb1-a9a3-6a39d7838ced", 00:18:30.356 "strip_size_kb": 0, 00:18:30.356 "state": "online", 00:18:30.356 "raid_level": "raid1", 00:18:30.356 "superblock": true, 00:18:30.356 "num_base_bdevs": 2, 00:18:30.356 "num_base_bdevs_discovered": 2, 00:18:30.356 "num_base_bdevs_operational": 2, 00:18:30.356 "process": { 00:18:30.356 "type": "rebuild", 00:18:30.356 "target": "spare", 00:18:30.356 "progress": { 00:18:30.356 "blocks": 2816, 00:18:30.356 "percent": 35 00:18:30.356 } 00:18:30.356 }, 00:18:30.356 "base_bdevs_list": [ 00:18:30.356 { 00:18:30.356 "name": "spare", 00:18:30.356 "uuid": "603c449f-857a-50ea-93bb-10deae71dbe7", 00:18:30.356 "is_configured": true, 00:18:30.356 "data_offset": 256, 00:18:30.356 "data_size": 7936 00:18:30.356 }, 00:18:30.356 { 00:18:30.356 "name": "BaseBdev2", 00:18:30.356 "uuid": "27d57c02-9b46-550b-8da2-3cbb22180963", 00:18:30.356 "is_configured": true, 00:18:30.356 "data_offset": 256, 00:18:30.356 "data_size": 7936 00:18:30.356 } 00:18:30.356 ] 00:18:30.356 }' 00:18:30.356 17:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:30.356 17:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:30.356 17:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:30.356 17:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:30.356 17:35:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:31.296 17:35:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:31.297 17:35:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:31.297 17:35:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:31.297 17:35:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:31.297 17:35:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:31.297 17:35:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:31.557 17:35:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.557 17:35:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.557 17:35:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.557 17:35:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.557 17:35:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.557 17:35:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:31.557 "name": "raid_bdev1", 00:18:31.557 "uuid": "b57bad35-66c9-4fb1-a9a3-6a39d7838ced", 00:18:31.557 "strip_size_kb": 0, 00:18:31.557 "state": "online", 00:18:31.557 "raid_level": "raid1", 00:18:31.557 "superblock": true, 00:18:31.557 "num_base_bdevs": 2, 00:18:31.557 "num_base_bdevs_discovered": 2, 00:18:31.557 "num_base_bdevs_operational": 2, 00:18:31.557 "process": { 00:18:31.557 "type": "rebuild", 00:18:31.557 "target": "spare", 00:18:31.557 "progress": { 00:18:31.557 "blocks": 5632, 00:18:31.557 "percent": 70 00:18:31.557 } 00:18:31.557 }, 00:18:31.557 "base_bdevs_list": [ 00:18:31.557 { 00:18:31.557 "name": "spare", 00:18:31.557 "uuid": "603c449f-857a-50ea-93bb-10deae71dbe7", 00:18:31.557 "is_configured": true, 00:18:31.557 "data_offset": 256, 00:18:31.557 "data_size": 7936 00:18:31.557 }, 00:18:31.557 { 00:18:31.557 "name": "BaseBdev2", 00:18:31.557 "uuid": "27d57c02-9b46-550b-8da2-3cbb22180963", 00:18:31.557 "is_configured": true, 00:18:31.557 "data_offset": 256, 00:18:31.557 "data_size": 7936 00:18:31.557 } 00:18:31.557 ] 00:18:31.557 }' 00:18:31.557 17:35:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:31.557 17:35:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:31.557 17:35:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:31.557 17:35:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:31.557 17:35:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:32.502 [2024-12-07 17:35:05.527558] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:32.502 [2024-12-07 17:35:05.527667] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:32.502 [2024-12-07 17:35:05.527805] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:32.502 17:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:32.502 17:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:32.502 17:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:32.502 17:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:32.502 17:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:32.502 17:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:32.502 17:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.502 17:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.502 17:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.502 17:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.502 17:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.502 17:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:32.502 "name": "raid_bdev1", 00:18:32.502 "uuid": "b57bad35-66c9-4fb1-a9a3-6a39d7838ced", 00:18:32.502 "strip_size_kb": 0, 00:18:32.502 "state": "online", 00:18:32.502 "raid_level": "raid1", 00:18:32.502 "superblock": true, 00:18:32.502 "num_base_bdevs": 2, 00:18:32.502 
"num_base_bdevs_discovered": 2, 00:18:32.502 "num_base_bdevs_operational": 2, 00:18:32.502 "base_bdevs_list": [ 00:18:32.502 { 00:18:32.502 "name": "spare", 00:18:32.502 "uuid": "603c449f-857a-50ea-93bb-10deae71dbe7", 00:18:32.502 "is_configured": true, 00:18:32.502 "data_offset": 256, 00:18:32.502 "data_size": 7936 00:18:32.502 }, 00:18:32.502 { 00:18:32.502 "name": "BaseBdev2", 00:18:32.502 "uuid": "27d57c02-9b46-550b-8da2-3cbb22180963", 00:18:32.502 "is_configured": true, 00:18:32.502 "data_offset": 256, 00:18:32.502 "data_size": 7936 00:18:32.502 } 00:18:32.502 ] 00:18:32.502 }' 00:18:32.502 17:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:32.777 17:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:32.777 17:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:32.777 17:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:32.777 17:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:18:32.777 17:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:32.777 17:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:32.777 17:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:32.777 17:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:32.777 17:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:32.777 17:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.777 
17:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.777 17:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.777 17:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.777 17:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.777 17:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:32.777 "name": "raid_bdev1", 00:18:32.777 "uuid": "b57bad35-66c9-4fb1-a9a3-6a39d7838ced", 00:18:32.777 "strip_size_kb": 0, 00:18:32.777 "state": "online", 00:18:32.777 "raid_level": "raid1", 00:18:32.777 "superblock": true, 00:18:32.777 "num_base_bdevs": 2, 00:18:32.777 "num_base_bdevs_discovered": 2, 00:18:32.777 "num_base_bdevs_operational": 2, 00:18:32.777 "base_bdevs_list": [ 00:18:32.777 { 00:18:32.777 "name": "spare", 00:18:32.777 "uuid": "603c449f-857a-50ea-93bb-10deae71dbe7", 00:18:32.777 "is_configured": true, 00:18:32.777 "data_offset": 256, 00:18:32.777 "data_size": 7936 00:18:32.777 }, 00:18:32.777 { 00:18:32.777 "name": "BaseBdev2", 00:18:32.777 "uuid": "27d57c02-9b46-550b-8da2-3cbb22180963", 00:18:32.777 "is_configured": true, 00:18:32.777 "data_offset": 256, 00:18:32.777 "data_size": 7936 00:18:32.777 } 00:18:32.777 ] 00:18:32.777 }' 00:18:32.777 17:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:32.777 17:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:32.777 17:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:32.777 17:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:32.777 17:35:06 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:32.777 17:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:32.777 17:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:32.777 17:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:32.777 17:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:32.777 17:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:32.777 17:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:32.777 17:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:32.777 17:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:32.777 17:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:32.777 17:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.777 17:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.777 17:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.777 17:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.777 17:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.777 17:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:32.777 "name": 
"raid_bdev1", 00:18:32.777 "uuid": "b57bad35-66c9-4fb1-a9a3-6a39d7838ced", 00:18:32.777 "strip_size_kb": 0, 00:18:32.777 "state": "online", 00:18:32.777 "raid_level": "raid1", 00:18:32.777 "superblock": true, 00:18:32.777 "num_base_bdevs": 2, 00:18:32.777 "num_base_bdevs_discovered": 2, 00:18:32.777 "num_base_bdevs_operational": 2, 00:18:32.777 "base_bdevs_list": [ 00:18:32.777 { 00:18:32.777 "name": "spare", 00:18:32.777 "uuid": "603c449f-857a-50ea-93bb-10deae71dbe7", 00:18:32.777 "is_configured": true, 00:18:32.777 "data_offset": 256, 00:18:32.777 "data_size": 7936 00:18:32.777 }, 00:18:32.777 { 00:18:32.777 "name": "BaseBdev2", 00:18:32.777 "uuid": "27d57c02-9b46-550b-8da2-3cbb22180963", 00:18:32.777 "is_configured": true, 00:18:32.777 "data_offset": 256, 00:18:32.777 "data_size": 7936 00:18:32.777 } 00:18:32.777 ] 00:18:32.777 }' 00:18:32.777 17:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:32.777 17:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:33.369 17:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:33.369 17:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.369 17:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:33.369 [2024-12-07 17:35:06.497614] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:33.369 [2024-12-07 17:35:06.497666] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:33.369 [2024-12-07 17:35:06.497780] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:33.369 [2024-12-07 17:35:06.497867] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:33.369 [2024-12-07 
17:35:06.497883] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:33.369 17:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.369 17:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.369 17:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.369 17:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:33.369 17:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:18:33.369 17:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.369 17:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:33.369 17:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:18:33.370 17:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:33.370 17:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:33.370 17:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.370 17:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:33.370 17:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.370 17:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:33.370 17:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.370 17:35:06 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:33.370 [2024-12-07 17:35:06.569482] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:33.370 [2024-12-07 17:35:06.569554] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:33.370 [2024-12-07 17:35:06.569586] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:18:33.370 [2024-12-07 17:35:06.569600] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:33.370 [2024-12-07 17:35:06.571697] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:33.370 [2024-12-07 17:35:06.571737] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:33.370 [2024-12-07 17:35:06.571809] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:33.370 [2024-12-07 17:35:06.571880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:33.370 [2024-12-07 17:35:06.572037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:33.370 spare 00:18:33.370 17:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.370 17:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:33.370 17:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.370 17:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:33.370 [2024-12-07 17:35:06.671960] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:33.370 [2024-12-07 17:35:06.672007] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:33.370 [2024-12-07 17:35:06.672146] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:33.370 [2024-12-07 17:35:06.672268] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:33.370 [2024-12-07 17:35:06.672286] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:33.370 [2024-12-07 17:35:06.672401] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:33.370 17:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.370 17:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:33.370 17:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:33.370 17:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:33.370 17:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:33.370 17:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:33.370 17:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:33.370 17:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:33.370 17:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:33.370 17:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:33.370 17:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:33.370 17:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.370 17:35:06 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.370 17:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.370 17:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:33.370 17:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.370 17:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:33.370 "name": "raid_bdev1", 00:18:33.370 "uuid": "b57bad35-66c9-4fb1-a9a3-6a39d7838ced", 00:18:33.370 "strip_size_kb": 0, 00:18:33.370 "state": "online", 00:18:33.370 "raid_level": "raid1", 00:18:33.370 "superblock": true, 00:18:33.370 "num_base_bdevs": 2, 00:18:33.370 "num_base_bdevs_discovered": 2, 00:18:33.370 "num_base_bdevs_operational": 2, 00:18:33.370 "base_bdevs_list": [ 00:18:33.370 { 00:18:33.370 "name": "spare", 00:18:33.370 "uuid": "603c449f-857a-50ea-93bb-10deae71dbe7", 00:18:33.370 "is_configured": true, 00:18:33.370 "data_offset": 256, 00:18:33.370 "data_size": 7936 00:18:33.370 }, 00:18:33.370 { 00:18:33.370 "name": "BaseBdev2", 00:18:33.370 "uuid": "27d57c02-9b46-550b-8da2-3cbb22180963", 00:18:33.370 "is_configured": true, 00:18:33.370 "data_offset": 256, 00:18:33.370 "data_size": 7936 00:18:33.370 } 00:18:33.370 ] 00:18:33.370 }' 00:18:33.370 17:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:33.370 17:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:33.940 17:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:33.940 17:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:33.940 17:35:07 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:33.940 17:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:33.940 17:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:33.940 17:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.940 17:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.940 17:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.940 17:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:33.940 17:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.940 17:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:33.940 "name": "raid_bdev1", 00:18:33.940 "uuid": "b57bad35-66c9-4fb1-a9a3-6a39d7838ced", 00:18:33.940 "strip_size_kb": 0, 00:18:33.940 "state": "online", 00:18:33.940 "raid_level": "raid1", 00:18:33.940 "superblock": true, 00:18:33.940 "num_base_bdevs": 2, 00:18:33.940 "num_base_bdevs_discovered": 2, 00:18:33.940 "num_base_bdevs_operational": 2, 00:18:33.940 "base_bdevs_list": [ 00:18:33.940 { 00:18:33.940 "name": "spare", 00:18:33.940 "uuid": "603c449f-857a-50ea-93bb-10deae71dbe7", 00:18:33.940 "is_configured": true, 00:18:33.940 "data_offset": 256, 00:18:33.940 "data_size": 7936 00:18:33.940 }, 00:18:33.940 { 00:18:33.940 "name": "BaseBdev2", 00:18:33.940 "uuid": "27d57c02-9b46-550b-8da2-3cbb22180963", 00:18:33.940 "is_configured": true, 00:18:33.940 "data_offset": 256, 00:18:33.940 "data_size": 7936 00:18:33.940 } 00:18:33.940 ] 00:18:33.940 }' 00:18:33.940 17:35:07 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:33.940 17:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:33.940 17:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:33.940 17:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:33.940 17:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.940 17:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.940 17:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:33.940 17:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:33.940 17:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.199 17:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:34.199 17:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:34.199 17:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.199 17:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.199 [2024-12-07 17:35:07.344243] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:34.199 17:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.199 17:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:34.199 17:35:07 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:34.199 17:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:34.199 17:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:34.199 17:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:34.199 17:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:34.199 17:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:34.199 17:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:34.199 17:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:34.199 17:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:34.199 17:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.199 17:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.199 17:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.199 17:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.199 17:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.199 17:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:34.199 "name": "raid_bdev1", 00:18:34.199 "uuid": "b57bad35-66c9-4fb1-a9a3-6a39d7838ced", 00:18:34.199 "strip_size_kb": 0, 00:18:34.199 "state": "online", 00:18:34.199 
"raid_level": "raid1", 00:18:34.199 "superblock": true, 00:18:34.199 "num_base_bdevs": 2, 00:18:34.199 "num_base_bdevs_discovered": 1, 00:18:34.199 "num_base_bdevs_operational": 1, 00:18:34.199 "base_bdevs_list": [ 00:18:34.199 { 00:18:34.199 "name": null, 00:18:34.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.200 "is_configured": false, 00:18:34.200 "data_offset": 0, 00:18:34.200 "data_size": 7936 00:18:34.200 }, 00:18:34.200 { 00:18:34.200 "name": "BaseBdev2", 00:18:34.200 "uuid": "27d57c02-9b46-550b-8da2-3cbb22180963", 00:18:34.200 "is_configured": true, 00:18:34.200 "data_offset": 256, 00:18:34.200 "data_size": 7936 00:18:34.200 } 00:18:34.200 ] 00:18:34.200 }' 00:18:34.200 17:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:34.200 17:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.459 17:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:34.459 17:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.459 17:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.459 [2024-12-07 17:35:07.807633] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:34.459 [2024-12-07 17:35:07.807849] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:34.459 [2024-12-07 17:35:07.807873] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:34.459 [2024-12-07 17:35:07.807961] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:34.459 [2024-12-07 17:35:07.825138] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:34.459 17:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.459 17:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:34.459 [2024-12-07 17:35:07.827254] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:35.840 17:35:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:35.840 17:35:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:35.840 17:35:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:35.840 17:35:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:35.840 17:35:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:35.840 17:35:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.840 17:35:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.841 17:35:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.841 17:35:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:35.841 17:35:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.841 17:35:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:18:35.841 "name": "raid_bdev1", 00:18:35.841 "uuid": "b57bad35-66c9-4fb1-a9a3-6a39d7838ced", 00:18:35.841 "strip_size_kb": 0, 00:18:35.841 "state": "online", 00:18:35.841 "raid_level": "raid1", 00:18:35.841 "superblock": true, 00:18:35.841 "num_base_bdevs": 2, 00:18:35.841 "num_base_bdevs_discovered": 2, 00:18:35.841 "num_base_bdevs_operational": 2, 00:18:35.841 "process": { 00:18:35.841 "type": "rebuild", 00:18:35.841 "target": "spare", 00:18:35.841 "progress": { 00:18:35.841 "blocks": 2560, 00:18:35.841 "percent": 32 00:18:35.841 } 00:18:35.841 }, 00:18:35.841 "base_bdevs_list": [ 00:18:35.841 { 00:18:35.841 "name": "spare", 00:18:35.841 "uuid": "603c449f-857a-50ea-93bb-10deae71dbe7", 00:18:35.841 "is_configured": true, 00:18:35.841 "data_offset": 256, 00:18:35.841 "data_size": 7936 00:18:35.841 }, 00:18:35.841 { 00:18:35.841 "name": "BaseBdev2", 00:18:35.841 "uuid": "27d57c02-9b46-550b-8da2-3cbb22180963", 00:18:35.841 "is_configured": true, 00:18:35.841 "data_offset": 256, 00:18:35.841 "data_size": 7936 00:18:35.841 } 00:18:35.841 ] 00:18:35.841 }' 00:18:35.841 17:35:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:35.841 17:35:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:35.841 17:35:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:35.841 17:35:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:35.841 17:35:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:35.841 17:35:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.841 17:35:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:35.841 [2024-12-07 17:35:08.970562] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:35.841 [2024-12-07 17:35:09.036187] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:35.841 [2024-12-07 17:35:09.036264] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:35.841 [2024-12-07 17:35:09.036282] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:35.841 [2024-12-07 17:35:09.036294] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:35.841 17:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.841 17:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:35.841 17:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:35.841 17:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:35.841 17:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:35.841 17:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:35.841 17:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:35.841 17:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:35.841 17:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:35.841 17:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:35.841 17:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:35.841 17:35:09 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.841 17:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.841 17:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.841 17:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:35.841 17:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.841 17:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:35.841 "name": "raid_bdev1", 00:18:35.841 "uuid": "b57bad35-66c9-4fb1-a9a3-6a39d7838ced", 00:18:35.841 "strip_size_kb": 0, 00:18:35.841 "state": "online", 00:18:35.841 "raid_level": "raid1", 00:18:35.841 "superblock": true, 00:18:35.841 "num_base_bdevs": 2, 00:18:35.841 "num_base_bdevs_discovered": 1, 00:18:35.841 "num_base_bdevs_operational": 1, 00:18:35.841 "base_bdevs_list": [ 00:18:35.841 { 00:18:35.841 "name": null, 00:18:35.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.841 "is_configured": false, 00:18:35.841 "data_offset": 0, 00:18:35.841 "data_size": 7936 00:18:35.841 }, 00:18:35.841 { 00:18:35.841 "name": "BaseBdev2", 00:18:35.841 "uuid": "27d57c02-9b46-550b-8da2-3cbb22180963", 00:18:35.841 "is_configured": true, 00:18:35.841 "data_offset": 256, 00:18:35.841 "data_size": 7936 00:18:35.841 } 00:18:35.841 ] 00:18:35.841 }' 00:18:35.841 17:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:35.841 17:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.411 17:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:36.411 17:35:09 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.411 17:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.411 [2024-12-07 17:35:09.499186] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:36.411 [2024-12-07 17:35:09.499266] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:36.411 [2024-12-07 17:35:09.499296] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:36.411 [2024-12-07 17:35:09.499311] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:36.411 [2024-12-07 17:35:09.499532] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:36.411 [2024-12-07 17:35:09.499561] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:36.411 [2024-12-07 17:35:09.499641] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:36.411 [2024-12-07 17:35:09.499658] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:36.411 [2024-12-07 17:35:09.499669] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:36.411 [2024-12-07 17:35:09.499700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:36.411 [2024-12-07 17:35:09.515692] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:18:36.411 spare 00:18:36.411 17:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.411 17:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:36.411 [2024-12-07 17:35:09.517805] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:37.352 17:35:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:37.352 17:35:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:37.352 17:35:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:37.352 17:35:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:37.352 17:35:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:37.352 17:35:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.352 17:35:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.352 17:35:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.352 17:35:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.352 17:35:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.352 17:35:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:18:37.352 "name": "raid_bdev1", 00:18:37.352 "uuid": "b57bad35-66c9-4fb1-a9a3-6a39d7838ced", 00:18:37.352 "strip_size_kb": 0, 00:18:37.352 "state": "online", 00:18:37.352 "raid_level": "raid1", 00:18:37.352 "superblock": true, 00:18:37.352 "num_base_bdevs": 2, 00:18:37.352 "num_base_bdevs_discovered": 2, 00:18:37.352 "num_base_bdevs_operational": 2, 00:18:37.352 "process": { 00:18:37.352 "type": "rebuild", 00:18:37.352 "target": "spare", 00:18:37.352 "progress": { 00:18:37.352 "blocks": 2560, 00:18:37.352 "percent": 32 00:18:37.352 } 00:18:37.352 }, 00:18:37.352 "base_bdevs_list": [ 00:18:37.352 { 00:18:37.352 "name": "spare", 00:18:37.352 "uuid": "603c449f-857a-50ea-93bb-10deae71dbe7", 00:18:37.352 "is_configured": true, 00:18:37.352 "data_offset": 256, 00:18:37.352 "data_size": 7936 00:18:37.352 }, 00:18:37.352 { 00:18:37.352 "name": "BaseBdev2", 00:18:37.352 "uuid": "27d57c02-9b46-550b-8da2-3cbb22180963", 00:18:37.352 "is_configured": true, 00:18:37.352 "data_offset": 256, 00:18:37.352 "data_size": 7936 00:18:37.352 } 00:18:37.352 ] 00:18:37.352 }' 00:18:37.352 17:35:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:37.352 17:35:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:37.352 17:35:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:37.352 17:35:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:37.352 17:35:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:37.352 17:35:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.352 17:35:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.352 [2024-12-07 
17:35:10.669701] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:37.352 [2024-12-07 17:35:10.726409] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:37.352 [2024-12-07 17:35:10.726474] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:37.352 [2024-12-07 17:35:10.726496] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:37.352 [2024-12-07 17:35:10.726504] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:37.612 17:35:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.612 17:35:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:37.612 17:35:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:37.612 17:35:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:37.612 17:35:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:37.612 17:35:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:37.612 17:35:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:37.612 17:35:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:37.612 17:35:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:37.612 17:35:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:37.613 17:35:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:37.613 17:35:10 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.613 17:35:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.613 17:35:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.613 17:35:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.613 17:35:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.613 17:35:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:37.613 "name": "raid_bdev1", 00:18:37.613 "uuid": "b57bad35-66c9-4fb1-a9a3-6a39d7838ced", 00:18:37.613 "strip_size_kb": 0, 00:18:37.613 "state": "online", 00:18:37.613 "raid_level": "raid1", 00:18:37.613 "superblock": true, 00:18:37.613 "num_base_bdevs": 2, 00:18:37.613 "num_base_bdevs_discovered": 1, 00:18:37.613 "num_base_bdevs_operational": 1, 00:18:37.613 "base_bdevs_list": [ 00:18:37.613 { 00:18:37.613 "name": null, 00:18:37.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.613 "is_configured": false, 00:18:37.613 "data_offset": 0, 00:18:37.613 "data_size": 7936 00:18:37.613 }, 00:18:37.613 { 00:18:37.613 "name": "BaseBdev2", 00:18:37.613 "uuid": "27d57c02-9b46-550b-8da2-3cbb22180963", 00:18:37.613 "is_configured": true, 00:18:37.613 "data_offset": 256, 00:18:37.613 "data_size": 7936 00:18:37.613 } 00:18:37.613 ] 00:18:37.613 }' 00:18:37.613 17:35:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:37.613 17:35:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.872 17:35:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:37.872 17:35:11 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:37.872 17:35:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:37.872 17:35:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:37.872 17:35:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:37.872 17:35:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.872 17:35:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.872 17:35:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.872 17:35:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.872 17:35:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.132 17:35:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:38.132 "name": "raid_bdev1", 00:18:38.132 "uuid": "b57bad35-66c9-4fb1-a9a3-6a39d7838ced", 00:18:38.132 "strip_size_kb": 0, 00:18:38.132 "state": "online", 00:18:38.132 "raid_level": "raid1", 00:18:38.132 "superblock": true, 00:18:38.132 "num_base_bdevs": 2, 00:18:38.132 "num_base_bdevs_discovered": 1, 00:18:38.132 "num_base_bdevs_operational": 1, 00:18:38.132 "base_bdevs_list": [ 00:18:38.132 { 00:18:38.132 "name": null, 00:18:38.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.132 "is_configured": false, 00:18:38.132 "data_offset": 0, 00:18:38.132 "data_size": 7936 00:18:38.132 }, 00:18:38.132 { 00:18:38.132 "name": "BaseBdev2", 00:18:38.132 "uuid": "27d57c02-9b46-550b-8da2-3cbb22180963", 00:18:38.132 "is_configured": true, 00:18:38.132 "data_offset": 256, 
00:18:38.132 "data_size": 7936 00:18:38.132 } 00:18:38.132 ] 00:18:38.132 }' 00:18:38.132 17:35:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:38.132 17:35:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:38.133 17:35:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:38.133 17:35:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:38.133 17:35:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:38.133 17:35:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.133 17:35:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.133 17:35:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.133 17:35:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:38.133 17:35:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.133 17:35:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.133 [2024-12-07 17:35:11.365359] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:38.133 [2024-12-07 17:35:11.365427] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:38.133 [2024-12-07 17:35:11.365454] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:38.133 [2024-12-07 17:35:11.365466] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:38.133 [2024-12-07 17:35:11.365684] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:38.133 [2024-12-07 17:35:11.365707] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:38.133 [2024-12-07 17:35:11.365766] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:38.133 [2024-12-07 17:35:11.365783] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:38.133 [2024-12-07 17:35:11.365795] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:38.133 [2024-12-07 17:35:11.365824] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:38.133 BaseBdev1 00:18:38.133 17:35:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.133 17:35:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:39.073 17:35:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:39.073 17:35:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:39.073 17:35:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:39.073 17:35:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:39.073 17:35:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:39.073 17:35:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:39.073 17:35:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:39.073 17:35:12 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:39.073 17:35:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:39.073 17:35:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:39.073 17:35:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.073 17:35:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.073 17:35:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.073 17:35:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.073 17:35:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.073 17:35:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:39.073 "name": "raid_bdev1", 00:18:39.073 "uuid": "b57bad35-66c9-4fb1-a9a3-6a39d7838ced", 00:18:39.073 "strip_size_kb": 0, 00:18:39.073 "state": "online", 00:18:39.073 "raid_level": "raid1", 00:18:39.073 "superblock": true, 00:18:39.073 "num_base_bdevs": 2, 00:18:39.073 "num_base_bdevs_discovered": 1, 00:18:39.073 "num_base_bdevs_operational": 1, 00:18:39.073 "base_bdevs_list": [ 00:18:39.073 { 00:18:39.073 "name": null, 00:18:39.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.073 "is_configured": false, 00:18:39.073 "data_offset": 0, 00:18:39.073 "data_size": 7936 00:18:39.073 }, 00:18:39.073 { 00:18:39.073 "name": "BaseBdev2", 00:18:39.073 "uuid": "27d57c02-9b46-550b-8da2-3cbb22180963", 00:18:39.073 "is_configured": true, 00:18:39.073 "data_offset": 256, 00:18:39.073 "data_size": 7936 00:18:39.073 } 00:18:39.073 ] 00:18:39.073 }' 00:18:39.073 17:35:12 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:39.073 17:35:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.642 17:35:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:39.642 17:35:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:39.642 17:35:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:39.642 17:35:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:39.642 17:35:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:39.642 17:35:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.642 17:35:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.642 17:35:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.642 17:35:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.642 17:35:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.642 17:35:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:39.642 "name": "raid_bdev1", 00:18:39.642 "uuid": "b57bad35-66c9-4fb1-a9a3-6a39d7838ced", 00:18:39.642 "strip_size_kb": 0, 00:18:39.642 "state": "online", 00:18:39.642 "raid_level": "raid1", 00:18:39.642 "superblock": true, 00:18:39.642 "num_base_bdevs": 2, 00:18:39.642 "num_base_bdevs_discovered": 1, 00:18:39.642 "num_base_bdevs_operational": 1, 00:18:39.642 "base_bdevs_list": [ 00:18:39.642 { 00:18:39.642 "name": 
null, 00:18:39.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.642 "is_configured": false, 00:18:39.642 "data_offset": 0, 00:18:39.642 "data_size": 7936 00:18:39.642 }, 00:18:39.642 { 00:18:39.642 "name": "BaseBdev2", 00:18:39.642 "uuid": "27d57c02-9b46-550b-8da2-3cbb22180963", 00:18:39.642 "is_configured": true, 00:18:39.642 "data_offset": 256, 00:18:39.642 "data_size": 7936 00:18:39.642 } 00:18:39.642 ] 00:18:39.642 }' 00:18:39.643 17:35:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:39.643 17:35:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:39.643 17:35:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:39.643 17:35:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:39.643 17:35:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:39.643 17:35:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:18:39.643 17:35:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:39.643 17:35:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:39.643 17:35:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:39.643 17:35:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:39.643 17:35:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:39.643 17:35:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:39.643 17:35:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.643 17:35:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.643 [2024-12-07 17:35:12.931088] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:39.643 [2024-12-07 17:35:12.931272] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:39.643 [2024-12-07 17:35:12.931292] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:39.643 request: 00:18:39.643 { 00:18:39.643 "base_bdev": "BaseBdev1", 00:18:39.643 "raid_bdev": "raid_bdev1", 00:18:39.643 "method": "bdev_raid_add_base_bdev", 00:18:39.643 "req_id": 1 00:18:39.643 } 00:18:39.643 Got JSON-RPC error response 00:18:39.643 response: 00:18:39.643 { 00:18:39.643 "code": -22, 00:18:39.643 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:39.643 } 00:18:39.643 17:35:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:39.643 17:35:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:18:39.643 17:35:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:39.643 17:35:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:39.643 17:35:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:39.643 17:35:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:40.581 17:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:18:40.581 17:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:40.581 17:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:40.581 17:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:40.581 17:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:40.581 17:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:40.581 17:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:40.581 17:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:40.581 17:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:40.581 17:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:40.581 17:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.581 17:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.581 17:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.581 17:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.841 17:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.841 17:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:40.841 "name": "raid_bdev1", 00:18:40.841 "uuid": "b57bad35-66c9-4fb1-a9a3-6a39d7838ced", 00:18:40.841 "strip_size_kb": 0, 
00:18:40.841 "state": "online", 00:18:40.841 "raid_level": "raid1", 00:18:40.841 "superblock": true, 00:18:40.841 "num_base_bdevs": 2, 00:18:40.841 "num_base_bdevs_discovered": 1, 00:18:40.841 "num_base_bdevs_operational": 1, 00:18:40.841 "base_bdevs_list": [ 00:18:40.841 { 00:18:40.841 "name": null, 00:18:40.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.841 "is_configured": false, 00:18:40.841 "data_offset": 0, 00:18:40.841 "data_size": 7936 00:18:40.841 }, 00:18:40.841 { 00:18:40.841 "name": "BaseBdev2", 00:18:40.841 "uuid": "27d57c02-9b46-550b-8da2-3cbb22180963", 00:18:40.841 "is_configured": true, 00:18:40.841 "data_offset": 256, 00:18:40.841 "data_size": 7936 00:18:40.841 } 00:18:40.841 ] 00:18:40.841 }' 00:18:40.841 17:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:40.841 17:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.100 17:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:41.100 17:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:41.100 17:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:41.100 17:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:41.100 17:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:41.100 17:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.100 17:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.100 17:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.100 17:35:14 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.100 17:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.100 17:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:41.100 "name": "raid_bdev1", 00:18:41.100 "uuid": "b57bad35-66c9-4fb1-a9a3-6a39d7838ced", 00:18:41.100 "strip_size_kb": 0, 00:18:41.100 "state": "online", 00:18:41.100 "raid_level": "raid1", 00:18:41.100 "superblock": true, 00:18:41.100 "num_base_bdevs": 2, 00:18:41.100 "num_base_bdevs_discovered": 1, 00:18:41.100 "num_base_bdevs_operational": 1, 00:18:41.100 "base_bdevs_list": [ 00:18:41.100 { 00:18:41.100 "name": null, 00:18:41.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.100 "is_configured": false, 00:18:41.100 "data_offset": 0, 00:18:41.100 "data_size": 7936 00:18:41.100 }, 00:18:41.100 { 00:18:41.100 "name": "BaseBdev2", 00:18:41.100 "uuid": "27d57c02-9b46-550b-8da2-3cbb22180963", 00:18:41.100 "is_configured": true, 00:18:41.100 "data_offset": 256, 00:18:41.100 "data_size": 7936 00:18:41.100 } 00:18:41.100 ] 00:18:41.100 }' 00:18:41.100 17:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:41.361 17:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:41.361 17:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:41.361 17:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:41.361 17:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89012 00:18:41.361 17:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89012 ']' 00:18:41.361 17:35:14 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89012 00:18:41.361 17:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:18:41.361 17:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:41.361 17:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89012 00:18:41.361 17:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:41.361 17:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:41.361 17:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89012' 00:18:41.361 killing process with pid 89012 00:18:41.361 17:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 89012 00:18:41.361 Received shutdown signal, test time was about 60.000000 seconds 00:18:41.361 00:18:41.361 Latency(us) 00:18:41.361 [2024-12-07T17:35:14.743Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:41.361 [2024-12-07T17:35:14.743Z] =================================================================================================================== 00:18:41.361 [2024-12-07T17:35:14.743Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:41.361 [2024-12-07 17:35:14.601309] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:41.361 17:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 89012 00:18:41.361 [2024-12-07 17:35:14.601480] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:41.361 [2024-12-07 17:35:14.601547] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:18:41.361 [2024-12-07 17:35:14.601566] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:41.620 [2024-12-07 17:35:14.916522] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:43.006 17:35:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:18:43.006 00:18:43.006 real 0m17.675s 00:18:43.006 user 0m23.049s 00:18:43.006 sys 0m1.740s 00:18:43.006 17:35:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:43.006 17:35:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.006 ************************************ 00:18:43.006 END TEST raid_rebuild_test_sb_md_interleaved 00:18:43.006 ************************************ 00:18:43.006 17:35:16 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:18:43.006 17:35:16 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:18:43.006 17:35:16 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89012 ']' 00:18:43.006 17:35:16 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89012 00:18:43.006 17:35:16 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:18:43.006 00:18:43.006 real 11m57.121s 00:18:43.006 user 16m2.826s 00:18:43.006 sys 1m55.314s 00:18:43.006 17:35:16 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:43.006 17:35:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:43.006 ************************************ 00:18:43.006 END TEST bdev_raid 00:18:43.006 ************************************ 00:18:43.006 17:35:16 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:43.006 17:35:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:43.006 17:35:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:43.006 17:35:16 -- common/autotest_common.sh@10 -- # set +x 00:18:43.006 
************************************ 00:18:43.006 START TEST spdkcli_raid 00:18:43.006 ************************************ 00:18:43.006 17:35:16 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:43.006 * Looking for test storage... 00:18:43.006 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:43.006 17:35:16 spdkcli_raid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:43.006 17:35:16 spdkcli_raid -- common/autotest_common.sh@1711 -- # lcov --version 00:18:43.006 17:35:16 spdkcli_raid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:43.266 17:35:16 spdkcli_raid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:43.266 17:35:16 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:43.266 17:35:16 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:43.266 17:35:16 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:43.266 17:35:16 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:18:43.266 17:35:16 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:18:43.266 17:35:16 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:18:43.266 17:35:16 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:18:43.266 17:35:16 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:18:43.266 17:35:16 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:18:43.266 17:35:16 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:18:43.266 17:35:16 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:43.266 17:35:16 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:18:43.266 17:35:16 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:18:43.266 17:35:16 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:43.266 17:35:16 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:43.266 17:35:16 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:18:43.266 17:35:16 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:18:43.266 17:35:16 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:43.266 17:35:16 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:18:43.266 17:35:16 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:18:43.266 17:35:16 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:18:43.266 17:35:16 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:18:43.266 17:35:16 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:43.266 17:35:16 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:18:43.266 17:35:16 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:18:43.266 17:35:16 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:43.266 17:35:16 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:43.266 17:35:16 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:18:43.266 17:35:16 spdkcli_raid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:43.266 17:35:16 spdkcli_raid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:43.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.267 --rc genhtml_branch_coverage=1 00:18:43.267 --rc genhtml_function_coverage=1 00:18:43.267 --rc genhtml_legend=1 00:18:43.267 --rc geninfo_all_blocks=1 00:18:43.267 --rc geninfo_unexecuted_blocks=1 00:18:43.267 00:18:43.267 ' 00:18:43.267 17:35:16 spdkcli_raid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:43.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.267 --rc genhtml_branch_coverage=1 00:18:43.267 --rc genhtml_function_coverage=1 00:18:43.267 --rc genhtml_legend=1 00:18:43.267 --rc geninfo_all_blocks=1 00:18:43.267 --rc geninfo_unexecuted_blocks=1 00:18:43.267 00:18:43.267 ' 00:18:43.267 
17:35:16 spdkcli_raid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:43.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.267 --rc genhtml_branch_coverage=1 00:18:43.267 --rc genhtml_function_coverage=1 00:18:43.267 --rc genhtml_legend=1 00:18:43.267 --rc geninfo_all_blocks=1 00:18:43.267 --rc geninfo_unexecuted_blocks=1 00:18:43.267 00:18:43.267 ' 00:18:43.267 17:35:16 spdkcli_raid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:43.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.267 --rc genhtml_branch_coverage=1 00:18:43.267 --rc genhtml_function_coverage=1 00:18:43.267 --rc genhtml_legend=1 00:18:43.267 --rc geninfo_all_blocks=1 00:18:43.267 --rc geninfo_unexecuted_blocks=1 00:18:43.267 00:18:43.267 ' 00:18:43.267 17:35:16 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:18:43.267 17:35:16 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:18:43.267 17:35:16 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:18:43.267 17:35:16 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:18:43.267 17:35:16 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:18:43.267 17:35:16 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:18:43.267 17:35:16 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:18:43.267 17:35:16 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:18:43.267 17:35:16 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:18:43.267 17:35:16 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:18:43.267 17:35:16 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:18:43.267 17:35:16 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:18:43.267 17:35:16 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:18:43.267 17:35:16 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:18:43.267 17:35:16 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:18:43.267 17:35:16 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:18:43.267 17:35:16 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:18:43.267 17:35:16 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:18:43.267 17:35:16 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:18:43.267 17:35:16 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:18:43.267 17:35:16 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:18:43.267 17:35:16 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:18:43.267 17:35:16 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:18:43.267 17:35:16 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:18:43.267 17:35:16 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:18:43.267 17:35:16 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:43.267 17:35:16 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:43.267 17:35:16 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:43.267 17:35:16 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:18:43.267 17:35:16 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:18:43.267 17:35:16 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:18:43.267 17:35:16 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:18:43.267 17:35:16 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:18:43.267 17:35:16 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:43.267 17:35:16 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:43.267 17:35:16 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:18:43.267 17:35:16 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=89693 00:18:43.267 17:35:16 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:18:43.267 17:35:16 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 89693 00:18:43.267 17:35:16 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 89693 ']' 00:18:43.267 17:35:16 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:43.267 17:35:16 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:43.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:43.267 17:35:16 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:43.267 17:35:16 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:43.267 17:35:16 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:43.267 [2024-12-07 17:35:16.590886] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:18:43.267 [2024-12-07 17:35:16.591028] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89693 ] 00:18:43.527 [2024-12-07 17:35:16.764287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:43.527 [2024-12-07 17:35:16.900775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.527 [2024-12-07 17:35:16.900813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:44.469 17:35:17 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:44.469 17:35:17 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:18:44.469 17:35:17 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:18:44.469 17:35:17 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:44.469 17:35:17 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:44.729 17:35:17 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:18:44.729 17:35:17 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:44.729 17:35:17 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:44.729 17:35:17 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:18:44.729 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:18:44.729 ' 00:18:46.112 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:18:46.112 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:18:46.372 17:35:19 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:18:46.373 17:35:19 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:46.373 17:35:19 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:18:46.373 17:35:19 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:18:46.373 17:35:19 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:46.373 17:35:19 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:46.373 17:35:19 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:18:46.373 ' 00:18:47.313 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:18:47.574 17:35:20 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:18:47.574 17:35:20 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:47.574 17:35:20 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:47.574 17:35:20 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:18:47.574 17:35:20 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:47.574 17:35:20 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:47.574 17:35:20 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:18:47.574 17:35:20 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:18:48.144 17:35:21 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:18:48.144 17:35:21 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:18:48.144 17:35:21 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:18:48.144 17:35:21 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:48.144 17:35:21 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:48.144 17:35:21 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:18:48.144 17:35:21 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:48.144 17:35:21 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:48.144 17:35:21 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:18:48.144 ' 00:18:49.082 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:18:49.082 17:35:22 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:18:49.082 17:35:22 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:49.082 17:35:22 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:49.341 17:35:22 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:18:49.341 17:35:22 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:49.341 17:35:22 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:49.341 17:35:22 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:18:49.341 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:18:49.341 ' 00:18:50.720 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:18:50.720 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:18:50.720 17:35:23 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:18:50.720 17:35:23 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:50.720 17:35:23 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:50.720 17:35:24 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 89693 00:18:50.720 17:35:24 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89693 ']' 00:18:50.720 17:35:24 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89693 00:18:50.720 17:35:24 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:18:50.720 17:35:24 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:50.720 17:35:24 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89693 00:18:50.720 17:35:24 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:50.720 17:35:24 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:50.720 killing process with pid 89693 00:18:50.720 17:35:24 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89693' 00:18:50.720 17:35:24 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 89693 00:18:50.720 17:35:24 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 89693 00:18:53.311 17:35:26 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:18:53.311 17:35:26 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 89693 ']' 00:18:53.311 17:35:26 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 89693 00:18:53.311 17:35:26 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89693 ']' 00:18:53.311 17:35:26 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89693 00:18:53.311 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (89693) - No such process 00:18:53.311 Process with pid 89693 is not found 00:18:53.311 17:35:26 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 89693 is not found' 00:18:53.311 17:35:26 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:18:53.311 17:35:26 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:18:53.311 17:35:26 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:18:53.311 17:35:26 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:18:53.311 00:18:53.311 real 0m10.336s 00:18:53.311 user 0m21.051s 00:18:53.311 sys 
0m1.328s 00:18:53.311 17:35:26 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:53.311 17:35:26 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:53.311 ************************************ 00:18:53.311 END TEST spdkcli_raid 00:18:53.311 ************************************ 00:18:53.311 17:35:26 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:18:53.311 17:35:26 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:53.311 17:35:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:53.311 17:35:26 -- common/autotest_common.sh@10 -- # set +x 00:18:53.311 ************************************ 00:18:53.311 START TEST blockdev_raid5f 00:18:53.311 ************************************ 00:18:53.311 17:35:26 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:18:53.571 * Looking for test storage... 00:18:53.571 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:18:53.571 17:35:26 blockdev_raid5f -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:53.571 17:35:26 blockdev_raid5f -- common/autotest_common.sh@1711 -- # lcov --version 00:18:53.571 17:35:26 blockdev_raid5f -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:53.571 17:35:26 blockdev_raid5f -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:53.571 17:35:26 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:53.571 17:35:26 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:53.571 17:35:26 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:53.571 17:35:26 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:18:53.571 17:35:26 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:18:53.571 17:35:26 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:18:53.571 17:35:26 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:18:53.571 17:35:26 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:18:53.571 17:35:26 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:18:53.572 17:35:26 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:18:53.572 17:35:26 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:53.572 17:35:26 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:18:53.572 17:35:26 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:18:53.572 17:35:26 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:53.572 17:35:26 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:53.572 17:35:26 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:18:53.572 17:35:26 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:18:53.572 17:35:26 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:53.572 17:35:26 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:18:53.572 17:35:26 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:18:53.572 17:35:26 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:18:53.572 17:35:26 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:18:53.572 17:35:26 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:53.572 17:35:26 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:18:53.572 17:35:26 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:18:53.572 17:35:26 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:53.572 17:35:26 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:53.572 17:35:26 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:18:53.572 17:35:26 blockdev_raid5f -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:53.572 17:35:26 blockdev_raid5f -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:53.572 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:53.572 --rc genhtml_branch_coverage=1 00:18:53.572 --rc genhtml_function_coverage=1 00:18:53.572 --rc genhtml_legend=1 00:18:53.572 --rc geninfo_all_blocks=1 00:18:53.572 --rc geninfo_unexecuted_blocks=1 00:18:53.572 00:18:53.572 ' 00:18:53.572 17:35:26 blockdev_raid5f -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:53.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:53.572 --rc genhtml_branch_coverage=1 00:18:53.572 --rc genhtml_function_coverage=1 00:18:53.572 --rc genhtml_legend=1 00:18:53.572 --rc geninfo_all_blocks=1 00:18:53.572 --rc geninfo_unexecuted_blocks=1 00:18:53.572 00:18:53.572 ' 00:18:53.572 17:35:26 blockdev_raid5f -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:53.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:53.572 --rc genhtml_branch_coverage=1 00:18:53.572 --rc genhtml_function_coverage=1 00:18:53.572 --rc genhtml_legend=1 00:18:53.572 --rc geninfo_all_blocks=1 00:18:53.572 --rc geninfo_unexecuted_blocks=1 00:18:53.572 00:18:53.572 ' 00:18:53.572 17:35:26 blockdev_raid5f -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:53.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:53.572 --rc genhtml_branch_coverage=1 00:18:53.572 --rc genhtml_function_coverage=1 00:18:53.572 --rc genhtml_legend=1 00:18:53.572 --rc geninfo_all_blocks=1 00:18:53.572 --rc geninfo_unexecuted_blocks=1 00:18:53.572 00:18:53.572 ' 00:18:53.572 17:35:26 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:18:53.572 17:35:26 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:18:53.572 17:35:26 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:18:53.572 17:35:26 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:53.572 17:35:26 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:18:53.572 17:35:26 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:18:53.572 17:35:26 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:18:53.572 17:35:26 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:18:53.572 17:35:26 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:18:53.572 17:35:26 blockdev_raid5f -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:18:53.572 17:35:26 blockdev_raid5f -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:18:53.572 17:35:26 blockdev_raid5f -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:18:53.572 17:35:26 blockdev_raid5f -- bdev/blockdev.sh@711 -- # uname -s 00:18:53.572 17:35:26 blockdev_raid5f -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:18:53.572 17:35:26 blockdev_raid5f -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:18:53.572 17:35:26 blockdev_raid5f -- bdev/blockdev.sh@719 -- # test_type=raid5f 00:18:53.572 17:35:26 blockdev_raid5f -- bdev/blockdev.sh@720 -- # crypto_device= 00:18:53.572 17:35:26 blockdev_raid5f -- bdev/blockdev.sh@721 -- # dek= 00:18:53.572 17:35:26 blockdev_raid5f -- bdev/blockdev.sh@722 -- # env_ctx= 00:18:53.572 17:35:26 blockdev_raid5f -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:18:53.572 17:35:26 blockdev_raid5f -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:18:53.572 17:35:26 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == bdev ]] 00:18:53.572 17:35:26 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == crypto_* ]] 00:18:53.572 17:35:26 blockdev_raid5f -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:18:53.572 17:35:26 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=89973 00:18:53.572 17:35:26 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:18:53.572 17:35:26 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess 
"$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:18:53.572 17:35:26 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 89973 00:18:53.572 17:35:26 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 89973 ']' 00:18:53.572 17:35:26 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:53.572 17:35:26 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:53.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:53.572 17:35:26 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:53.572 17:35:26 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:53.572 17:35:26 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:53.832 [2024-12-07 17:35:26.969456] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:18:53.832 [2024-12-07 17:35:26.969584] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89973 ] 00:18:53.832 [2024-12-07 17:35:27.144806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.093 [2024-12-07 17:35:27.281768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:55.034 17:35:28 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:55.034 17:35:28 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:18:55.034 17:35:28 blockdev_raid5f -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:18:55.034 17:35:28 blockdev_raid5f -- bdev/blockdev.sh@763 -- # setup_raid5f_conf 00:18:55.034 17:35:28 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:18:55.034 17:35:28 blockdev_raid5f -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.034 17:35:28 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:55.034 Malloc0 00:18:55.034 Malloc1 00:18:55.294 Malloc2 00:18:55.294 17:35:28 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.294 17:35:28 blockdev_raid5f -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:18:55.294 17:35:28 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.294 17:35:28 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:55.294 17:35:28 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.294 17:35:28 blockdev_raid5f -- bdev/blockdev.sh@777 -- # cat 00:18:55.294 17:35:28 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:18:55.294 17:35:28 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.294 17:35:28 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:55.294 17:35:28 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.294 17:35:28 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:18:55.294 17:35:28 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.294 17:35:28 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:55.294 17:35:28 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.294 17:35:28 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:18:55.294 17:35:28 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.294 17:35:28 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:55.294 17:35:28 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.294 17:35:28 blockdev_raid5f -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:18:55.294 17:35:28 blockdev_raid5f -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 
00:18:55.294 17:35:28 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.294 17:35:28 blockdev_raid5f -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:18:55.294 17:35:28 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:55.294 17:35:28 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.294 17:35:28 blockdev_raid5f -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:18:55.295 17:35:28 blockdev_raid5f -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "5c709e14-5196-41cd-9522-b238b9871a12"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "5c709e14-5196-41cd-9522-b238b9871a12",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "5c709e14-5196-41cd-9522-b238b9871a12",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "bd383f91-3763-49ad-8630-694920dfebe7",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "8442d998-81b8-45c6-9d6e-427b3d46770d",' ' "is_configured": true,' ' "data_offset": 0,' ' 
"data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "492be8b4-8c33-4d62-a339-47711d47856c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:18:55.295 17:35:28 blockdev_raid5f -- bdev/blockdev.sh@786 -- # jq -r .name 00:18:55.295 17:35:28 blockdev_raid5f -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:18:55.295 17:35:28 blockdev_raid5f -- bdev/blockdev.sh@789 -- # hello_world_bdev=raid5f 00:18:55.295 17:35:28 blockdev_raid5f -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:18:55.295 17:35:28 blockdev_raid5f -- bdev/blockdev.sh@791 -- # killprocess 89973 00:18:55.295 17:35:28 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 89973 ']' 00:18:55.295 17:35:28 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 89973 00:18:55.295 17:35:28 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:18:55.295 17:35:28 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:55.295 17:35:28 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89973 00:18:55.295 17:35:28 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:55.295 17:35:28 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:55.295 killing process with pid 89973 00:18:55.295 17:35:28 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89973' 00:18:55.295 17:35:28 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 89973 00:18:55.295 17:35:28 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 89973 00:18:58.591 17:35:31 blockdev_raid5f -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:58.591 17:35:31 blockdev_raid5f -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:18:58.591 17:35:31 
blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:58.591 17:35:31 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:58.591 17:35:31 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:58.591 ************************************ 00:18:58.591 START TEST bdev_hello_world 00:18:58.591 ************************************ 00:18:58.591 17:35:31 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:18:58.591 [2024-12-07 17:35:31.536368] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:18:58.591 [2024-12-07 17:35:31.536493] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90046 ] 00:18:58.591 [2024-12-07 17:35:31.714180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.591 [2024-12-07 17:35:31.846214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:59.161 [2024-12-07 17:35:32.466982] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:18:59.161 [2024-12-07 17:35:32.467035] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:18:59.161 [2024-12-07 17:35:32.467054] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:18:59.161 [2024-12-07 17:35:32.467581] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:18:59.161 [2024-12-07 17:35:32.467749] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:18:59.161 [2024-12-07 17:35:32.467775] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:18:59.161 [2024-12-07 17:35:32.467833] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:18:59.161 00:18:59.161 [2024-12-07 17:35:32.467853] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:19:00.564 00:19:00.564 real 0m2.479s 00:19:00.564 user 0m2.000s 00:19:00.564 sys 0m0.353s 00:19:00.564 17:35:33 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:00.564 17:35:33 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:19:00.564 ************************************ 00:19:00.564 END TEST bdev_hello_world 00:19:00.564 ************************************ 00:19:00.825 17:35:33 blockdev_raid5f -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:19:00.825 17:35:33 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:00.825 17:35:33 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:00.825 17:35:33 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:00.825 ************************************ 00:19:00.825 START TEST bdev_bounds 00:19:00.825 ************************************ 00:19:00.825 17:35:33 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:19:00.825 17:35:34 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90089 00:19:00.825 17:35:34 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:00.825 17:35:34 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:19:00.825 Process bdevio pid: 90089 00:19:00.825 17:35:34 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90089' 00:19:00.825 17:35:34 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90089 00:19:00.825 17:35:34 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 90089 ']' 00:19:00.825 17:35:34 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:00.825 17:35:34 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:00.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:00.825 17:35:34 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:00.825 17:35:34 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:00.825 17:35:34 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:00.825 [2024-12-07 17:35:34.083957] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:19:00.825 [2024-12-07 17:35:34.084075] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90089 ] 00:19:01.084 [2024-12-07 17:35:34.265292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:01.084 [2024-12-07 17:35:34.401806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:01.084 [2024-12-07 17:35:34.402004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:01.084 [2024-12-07 17:35:34.402041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:01.654 17:35:34 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:01.654 17:35:34 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:19:01.654 17:35:34 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:19:01.914 I/O targets: 00:19:01.914 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:19:01.914 00:19:01.914 
00:19:01.914 CUnit - A unit testing framework for C - Version 2.1-3 00:19:01.914 http://cunit.sourceforge.net/ 00:19:01.914 00:19:01.914 00:19:01.914 Suite: bdevio tests on: raid5f 00:19:01.914 Test: blockdev write read block ...passed 00:19:01.914 Test: blockdev write zeroes read block ...passed 00:19:01.914 Test: blockdev write zeroes read no split ...passed 00:19:01.914 Test: blockdev write zeroes read split ...passed 00:19:02.172 Test: blockdev write zeroes read split partial ...passed 00:19:02.172 Test: blockdev reset ...passed 00:19:02.172 Test: blockdev write read 8 blocks ...passed 00:19:02.172 Test: blockdev write read size > 128k ...passed 00:19:02.172 Test: blockdev write read invalid size ...passed 00:19:02.172 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:02.172 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:02.172 Test: blockdev write read max offset ...passed 00:19:02.172 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:02.172 Test: blockdev writev readv 8 blocks ...passed 00:19:02.172 Test: blockdev writev readv 30 x 1block ...passed 00:19:02.172 Test: blockdev writev readv block ...passed 00:19:02.172 Test: blockdev writev readv size > 128k ...passed 00:19:02.172 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:02.172 Test: blockdev comparev and writev ...passed 00:19:02.172 Test: blockdev nvme passthru rw ...passed 00:19:02.172 Test: blockdev nvme passthru vendor specific ...passed 00:19:02.172 Test: blockdev nvme admin passthru ...passed 00:19:02.172 Test: blockdev copy ...passed 00:19:02.172 00:19:02.172 Run Summary: Type Total Ran Passed Failed Inactive 00:19:02.172 suites 1 1 n/a 0 0 00:19:02.172 tests 23 23 23 0 0 00:19:02.172 asserts 130 130 130 0 n/a 00:19:02.172 00:19:02.172 Elapsed time = 0.623 seconds 00:19:02.172 0 00:19:02.172 17:35:35 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90089 00:19:02.172 
17:35:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 90089 ']' 00:19:02.172 17:35:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 90089 00:19:02.172 17:35:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:19:02.172 17:35:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:02.172 17:35:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90089 00:19:02.172 17:35:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:02.172 17:35:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:02.172 17:35:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90089' 00:19:02.172 killing process with pid 90089 00:19:02.172 17:35:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 90089 00:19:02.172 17:35:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 90089 00:19:03.552 17:35:36 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:19:03.552 00:19:03.552 real 0m2.881s 00:19:03.552 user 0m7.066s 00:19:03.552 sys 0m0.493s 00:19:03.552 17:35:36 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:03.552 17:35:36 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:03.552 ************************************ 00:19:03.552 END TEST bdev_bounds 00:19:03.552 ************************************ 00:19:03.812 17:35:36 blockdev_raid5f -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:19:03.812 17:35:36 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:03.812 17:35:36 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:03.812 
17:35:36 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:03.812 ************************************ 00:19:03.812 START TEST bdev_nbd 00:19:03.812 ************************************ 00:19:03.812 17:35:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:19:03.812 17:35:36 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:19:03.812 17:35:36 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:19:03.812 17:35:36 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:03.812 17:35:36 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:03.813 17:35:36 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:19:03.813 17:35:36 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:19:03.813 17:35:36 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:19:03.813 17:35:36 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:19:03.813 17:35:36 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:19:03.813 17:35:36 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:19:03.813 17:35:36 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:19:03.813 17:35:36 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:19:03.813 17:35:36 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:19:03.813 17:35:36 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:19:03.813 17:35:36 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:19:03.813 17:35:36 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90153 00:19:03.813 17:35:36 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:19:03.813 17:35:36 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90153 /var/tmp/spdk-nbd.sock 00:19:03.813 17:35:36 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:03.813 17:35:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 90153 ']' 00:19:03.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:19:03.813 17:35:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:19:03.813 17:35:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:03.813 17:35:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:19:03.813 17:35:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:03.813 17:35:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:03.813 [2024-12-07 17:35:37.073600] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:19:03.813 [2024-12-07 17:35:37.073831] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:04.073 [2024-12-07 17:35:37.254976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.073 [2024-12-07 17:35:37.389078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:05.014 17:35:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:05.014 17:35:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:19:05.014 17:35:38 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:19:05.014 17:35:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:05.014 17:35:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:19:05.014 17:35:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:19:05.014 17:35:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:19:05.014 17:35:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:05.014 17:35:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:19:05.014 17:35:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:19:05.014 17:35:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:19:05.014 17:35:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:19:05.014 17:35:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:19:05.014 17:35:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:05.014 17:35:38 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:19:05.014 17:35:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:19:05.014 17:35:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:19:05.014 17:35:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:19:05.014 17:35:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:05.014 17:35:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:05.014 17:35:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:05.014 17:35:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:05.014 17:35:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:05.014 17:35:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:05.014 17:35:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:05.014 17:35:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:05.014 17:35:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:05.014 1+0 records in 00:19:05.014 1+0 records out 00:19:05.014 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000496439 s, 8.3 MB/s 00:19:05.014 17:35:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:05.014 17:35:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:05.014 17:35:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:05.014 17:35:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:19:05.014 17:35:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:05.014 17:35:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:05.015 17:35:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:05.015 17:35:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:05.274 17:35:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:19:05.274 { 00:19:05.275 "nbd_device": "/dev/nbd0", 00:19:05.275 "bdev_name": "raid5f" 00:19:05.275 } 00:19:05.275 ]' 00:19:05.275 17:35:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:19:05.275 17:35:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:19:05.275 { 00:19:05.275 "nbd_device": "/dev/nbd0", 00:19:05.275 "bdev_name": "raid5f" 00:19:05.275 } 00:19:05.275 ]' 00:19:05.275 17:35:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:19:05.275 17:35:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:05.275 17:35:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:05.275 17:35:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:05.275 17:35:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:05.275 17:35:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:05.275 17:35:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:05.275 17:35:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:05.535 17:35:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:19:05.535 17:35:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:05.535 17:35:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:05.535 17:35:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:05.535 17:35:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:05.535 17:35:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:05.535 17:35:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:05.535 17:35:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:05.535 17:35:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:05.535 17:35:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:05.535 17:35:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:05.796 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:05.796 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:05.796 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:05.796 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:05.796 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:05.796 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:05.796 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:05.796 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:05.796 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:05.796 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:19:05.796 17:35:39 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:19:05.796 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:19:05.796 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:05.796 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:05.796 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:19:05.796 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:19:05.796 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:19:05.796 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:19:05.796 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:05.796 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:05.796 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:19:05.797 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:05.797 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:05.797 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:05.797 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:19:05.797 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:05.797 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:05.797 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:19:06.056 /dev/nbd0 00:19:06.056 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:06.056 17:35:39 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:06.056 17:35:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:06.056 17:35:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:06.056 17:35:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:06.056 17:35:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:06.056 17:35:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:06.056 17:35:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:06.056 17:35:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:06.056 17:35:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:06.056 17:35:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:06.056 1+0 records in 00:19:06.056 1+0 records out 00:19:06.056 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000495745 s, 8.3 MB/s 00:19:06.056 17:35:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:06.056 17:35:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:06.056 17:35:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:06.056 17:35:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:06.056 17:35:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:06.056 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:06.056 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:06.056 17:35:39 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:06.056 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:06.056 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:06.316 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:19:06.316 { 00:19:06.316 "nbd_device": "/dev/nbd0", 00:19:06.316 "bdev_name": "raid5f" 00:19:06.316 } 00:19:06.316 ]' 00:19:06.316 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:19:06.316 { 00:19:06.316 "nbd_device": "/dev/nbd0", 00:19:06.316 "bdev_name": "raid5f" 00:19:06.316 } 00:19:06.316 ]' 00:19:06.316 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:06.316 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:19:06.316 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:19:06.316 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:06.316 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:19:06.316 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:19:06.316 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:19:06.316 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:19:06.316 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:19:06.316 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:06.316 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:06.316 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:19:06.316 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:06.316 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:19:06.316 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:19:06.316 256+0 records in 00:19:06.316 256+0 records out 00:19:06.316 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146207 s, 71.7 MB/s 00:19:06.316 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:06.316 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:19:06.316 256+0 records in 00:19:06.316 256+0 records out 00:19:06.316 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0325708 s, 32.2 MB/s 00:19:06.316 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:19:06.316 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:06.316 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:06.316 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:19:06.316 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:06.316 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:19:06.316 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:19:06.316 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:06.316 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:19:06.316 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:06.316 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:06.316 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:06.316 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:06.316 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:06.316 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:06.316 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:06.316 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:06.576 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:06.576 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:06.576 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:06.576 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:06.576 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:06.576 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:06.576 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:06.576 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:06.576 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:06.576 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:06.576 17:35:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:19:06.836 17:35:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:06.836 17:35:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:06.836 17:35:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:06.836 17:35:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:06.836 17:35:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:06.836 17:35:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:06.836 17:35:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:06.836 17:35:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:06.836 17:35:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:06.836 17:35:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:19:06.836 17:35:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:19:06.836 17:35:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:19:06.836 17:35:40 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:06.836 17:35:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:06.836 17:35:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:19:06.836 17:35:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:19:07.096 malloc_lvol_verify 00:19:07.096 17:35:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:19:07.356 da6d11d0-6363-4ff4-b987-d67e36e6f5e0 00:19:07.356 17:35:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:19:07.616 3934501b-cc6c-4288-b008-6bcf110c3559 00:19:07.616 17:35:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:19:07.877 /dev/nbd0 00:19:07.877 17:35:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:19:07.877 17:35:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:19:07.877 17:35:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:19:07.877 17:35:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:19:07.877 17:35:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:19:07.877 mke2fs 1.47.0 (5-Feb-2023) 00:19:07.877 Discarding device blocks: 0/4096 done 00:19:07.877 Creating filesystem with 4096 1k blocks and 1024 inodes 00:19:07.877 00:19:07.877 Allocating group tables: 0/1 done 00:19:07.877 Writing inode tables: 0/1 done 00:19:07.877 Creating journal (1024 blocks): done 00:19:07.877 Writing superblocks and filesystem accounting information: 0/1 done 00:19:07.877 00:19:07.877 17:35:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:07.877 17:35:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:07.877 17:35:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:07.877 17:35:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:07.877 17:35:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:07.877 17:35:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:07.877 17:35:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:07.877 17:35:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:07.877 17:35:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:07.877 17:35:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:07.877 17:35:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:07.877 17:35:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:07.877 17:35:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:07.877 17:35:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:07.877 17:35:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:07.877 17:35:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90153 00:19:07.877 17:35:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 90153 ']' 00:19:07.877 17:35:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 90153 00:19:07.877 17:35:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:19:08.137 17:35:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:08.137 17:35:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90153 00:19:08.137 17:35:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:08.137 17:35:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:08.137 17:35:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90153' 00:19:08.137 killing process with pid 90153 00:19:08.137 17:35:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 90153 00:19:08.137 17:35:41 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 90153 00:19:09.520 17:35:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:19:09.520 00:19:09.520 real 0m5.864s 00:19:09.520 user 0m7.688s 00:19:09.520 sys 0m1.422s 00:19:09.520 17:35:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:09.520 ************************************ 00:19:09.520 END TEST bdev_nbd 00:19:09.520 17:35:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:09.520 ************************************ 00:19:09.520 17:35:42 blockdev_raid5f -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:19:09.520 17:35:42 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = nvme ']' 00:19:09.520 17:35:42 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = gpt ']' 00:19:09.520 17:35:42 blockdev_raid5f -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:19:09.520 17:35:42 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:09.520 17:35:42 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:09.521 17:35:42 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:09.521 ************************************ 00:19:09.521 START TEST bdev_fio 00:19:09.521 ************************************ 00:19:09.521 17:35:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:19:09.521 17:35:42 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:19:09.521 17:35:42 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:19:09.521 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:19:09.521 17:35:42 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:19:09.781 17:35:42 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:19:09.781 17:35:42 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:19:09.781 17:35:42 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:19:09.781 17:35:42 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:19:09.781 17:35:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:09.781 17:35:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:19:09.781 17:35:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:19:09.781 17:35:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:19:09.781 17:35:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:19:09.781 17:35:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:09.781 17:35:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:19:09.781 17:35:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:19:09.781 17:35:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:09.781 17:35:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:19:09.781 17:35:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:19:09.781 17:35:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:19:09.781 17:35:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:19:09.781 17:35:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:19:09.781 17:35:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:19:09.781 17:35:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:19:09.781 17:35:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:09.781 17:35:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:19:09.781 17:35:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:19:09.781 17:35:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:19:09.781 17:35:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:09.781 17:35:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:19:09.781 17:35:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:09.781 17:35:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:09.781 ************************************ 00:19:09.781 START TEST bdev_fio_rw_verify 00:19:09.781 ************************************ 00:19:09.781 17:35:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:09.781 17:35:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:09.781 17:35:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:09.781 17:35:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:09.781 17:35:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:09.781 17:35:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:09.781 17:35:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:19:09.781 17:35:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:09.781 17:35:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:09.781 17:35:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:09.781 17:35:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:19:09.781 17:35:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:09.782 17:35:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:09.782 17:35:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:09.782 17:35:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:19:09.782 17:35:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:09.782 17:35:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:10.042 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:10.042 fio-3.35 00:19:10.042 Starting 1 thread 00:19:22.265 00:19:22.265 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90357: Sat Dec 7 17:35:54 2024 00:19:22.265 read: IOPS=12.0k, BW=46.7MiB/s (49.0MB/s)(467MiB/10001msec) 00:19:22.265 slat (nsec): min=17715, max=74128, avg=20059.45, stdev=2271.24 00:19:22.265 clat (usec): min=11, max=342, avg=135.96, stdev=47.57 00:19:22.265 lat (usec): min=31, max=372, avg=156.02, stdev=47.86 00:19:22.265 clat percentiles (usec): 00:19:22.265 | 50.000th=[ 137], 99.000th=[ 225], 99.900th=[ 253], 99.990th=[ 297], 00:19:22.265 | 99.999th=[ 338] 00:19:22.265 write: IOPS=12.6k, BW=49.1MiB/s (51.5MB/s)(485MiB/9870msec); 0 zone resets 00:19:22.265 slat (usec): min=7, max=289, avg=16.49, stdev= 3.72 00:19:22.265 clat (usec): min=58, max=1670, avg=306.52, stdev=41.40 00:19:22.265 lat (usec): min=73, max=1959, avg=323.01, stdev=42.43 00:19:22.265 clat percentiles (usec): 00:19:22.265 | 50.000th=[ 310], 99.000th=[ 383], 99.900th=[ 603], 99.990th=[ 996], 00:19:22.265 | 99.999th=[ 1582] 00:19:22.265 bw ( KiB/s): min=46352, max=53032, per=98.80%, avg=49679.16, stdev=1594.24, samples=19 00:19:22.265 iops : min=11588, max=13258, avg=12419.79, stdev=398.56, samples=19 00:19:22.265 lat (usec) : 20=0.01%, 50=0.01%, 
100=13.59%, 250=39.99%, 500=46.35% 00:19:22.265 lat (usec) : 750=0.05%, 1000=0.02% 00:19:22.265 lat (msec) : 2=0.01% 00:19:22.265 cpu : usr=98.85%, sys=0.48%, ctx=18, majf=0, minf=9855 00:19:22.265 IO depths : 1=7.7%, 2=19.9%, 4=55.1%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:22.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:22.265 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:22.265 issued rwts: total=119653,124069,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:22.265 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:22.265 00:19:22.265 Run status group 0 (all jobs): 00:19:22.265 READ: bw=46.7MiB/s (49.0MB/s), 46.7MiB/s-46.7MiB/s (49.0MB/s-49.0MB/s), io=467MiB (490MB), run=10001-10001msec 00:19:22.265 WRITE: bw=49.1MiB/s (51.5MB/s), 49.1MiB/s-49.1MiB/s (51.5MB/s-51.5MB/s), io=485MiB (508MB), run=9870-9870msec 00:19:22.835 ----------------------------------------------------- 00:19:22.836 Suppressions used: 00:19:22.836 count bytes template 00:19:22.836 1 7 /usr/src/fio/parse.c 00:19:22.836 696 66816 /usr/src/fio/iolog.c 00:19:22.836 1 8 libtcmalloc_minimal.so 00:19:22.836 1 904 libcrypto.so 00:19:22.836 ----------------------------------------------------- 00:19:22.836 00:19:22.836 00:19:22.836 real 0m12.966s 00:19:22.836 user 0m13.289s 00:19:22.836 sys 0m0.771s 00:19:22.836 17:35:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:22.836 17:35:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:19:22.836 ************************************ 00:19:22.836 END TEST bdev_fio_rw_verify 00:19:22.836 ************************************ 00:19:22.836 17:35:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:19:22.836 17:35:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:22.836 17:35:56 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:19:22.836 17:35:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:22.836 17:35:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:19:22.836 17:35:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:19:22.836 17:35:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:19:22.836 17:35:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:19:22.836 17:35:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:22.836 17:35:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:19:22.836 17:35:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:19:22.836 17:35:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:22.836 17:35:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:19:22.836 17:35:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:19:22.836 17:35:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:19:22.836 17:35:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:19:22.836 17:35:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "5c709e14-5196-41cd-9522-b238b9871a12"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "5c709e14-5196-41cd-9522-b238b9871a12",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "5c709e14-5196-41cd-9522-b238b9871a12",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "bd383f91-3763-49ad-8630-694920dfebe7",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "8442d998-81b8-45c6-9d6e-427b3d46770d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "492be8b4-8c33-4d62-a339-47711d47856c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:19:22.836 17:35:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:19:22.836 17:35:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:19:22.836 17:35:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:22.836 /home/vagrant/spdk_repo/spdk 00:19:22.836 17:35:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:19:22.836 17:35:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:19:22.836 17:35:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:19:22.836 00:19:22.836 real 0m13.268s 00:19:22.836 user 0m13.408s 00:19:22.836 sys 0m0.922s 00:19:22.836 17:35:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:22.836 17:35:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:22.836 ************************************ 00:19:22.836 END TEST bdev_fio 00:19:22.836 ************************************ 00:19:23.097 17:35:56 blockdev_raid5f -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:23.097 17:35:56 blockdev_raid5f -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:23.097 17:35:56 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:19:23.097 17:35:56 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:23.097 17:35:56 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:23.097 ************************************ 00:19:23.097 START TEST bdev_verify 00:19:23.097 ************************************ 00:19:23.097 17:35:56 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:23.097 [2024-12-07 17:35:56.329397] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:19:23.097 [2024-12-07 17:35:56.329510] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90526 ] 00:19:23.357 [2024-12-07 17:35:56.502375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:23.357 [2024-12-07 17:35:56.639731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:23.357 [2024-12-07 17:35:56.639749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:23.928 Running I/O for 5 seconds... 00:19:26.251 10392.00 IOPS, 40.59 MiB/s [2024-12-07T17:36:00.574Z] 10503.00 IOPS, 41.03 MiB/s [2024-12-07T17:36:01.514Z] 10507.67 IOPS, 41.05 MiB/s [2024-12-07T17:36:02.466Z] 10508.00 IOPS, 41.05 MiB/s [2024-12-07T17:36:02.466Z] 10527.60 IOPS, 41.12 MiB/s 00:19:29.084 Latency(us) 00:19:29.084 [2024-12-07T17:36:02.466Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:29.084 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:29.084 Verification LBA range: start 0x0 length 0x2000 00:19:29.084 raid5f : 5.02 6417.02 25.07 0.00 0.00 30076.46 105.08 21177.57 00:19:29.084 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:29.084 Verification LBA range: start 0x2000 length 0x2000 00:19:29.084 raid5f : 5.02 4102.52 16.03 0.00 0.00 46990.13 277.24 33426.22 00:19:29.084 [2024-12-07T17:36:02.466Z] =================================================================================================================== 00:19:29.084 [2024-12-07T17:36:02.466Z] Total : 10519.54 41.09 0.00 0.00 36670.00 105.08 33426.22 00:19:30.466 00:19:30.466 real 0m7.469s 00:19:30.466 user 0m13.726s 00:19:30.466 sys 0m0.365s 00:19:30.466 17:36:03 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:30.466 17:36:03 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:19:30.466 ************************************ 00:19:30.466 END TEST bdev_verify 00:19:30.466 ************************************ 00:19:30.466 17:36:03 blockdev_raid5f -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:30.466 17:36:03 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:19:30.466 17:36:03 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:30.466 17:36:03 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:30.466 ************************************ 00:19:30.466 START TEST bdev_verify_big_io 00:19:30.466 ************************************ 00:19:30.466 17:36:03 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:30.726 [2024-12-07 17:36:03.876649] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:19:30.726 [2024-12-07 17:36:03.876789] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90619 ] 00:19:30.726 [2024-12-07 17:36:04.055212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:30.986 [2024-12-07 17:36:04.183317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:30.986 [2024-12-07 17:36:04.183348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:31.555 Running I/O for 5 seconds... 
00:19:33.872 633.00 IOPS, 39.56 MiB/s [2024-12-07T17:36:08.195Z] 728.50 IOPS, 45.53 MiB/s [2024-12-07T17:36:09.135Z] 739.67 IOPS, 46.23 MiB/s [2024-12-07T17:36:10.083Z] 745.25 IOPS, 46.58 MiB/s [2024-12-07T17:36:10.083Z] 761.60 IOPS, 47.60 MiB/s 00:19:36.701 Latency(us) 00:19:36.701 [2024-12-07T17:36:10.084Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:36.702 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:36.702 Verification LBA range: start 0x0 length 0x200 00:19:36.702 raid5f : 5.20 439.50 27.47 0.00 0.00 7311323.43 178.86 322356.99 00:19:36.702 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:36.702 Verification LBA range: start 0x200 length 0x200 00:19:36.702 raid5f : 5.28 336.59 21.04 0.00 0.00 9463567.61 190.49 415767.25 00:19:36.702 [2024-12-07T17:36:10.084Z] =================================================================================================================== 00:19:36.702 [2024-12-07T17:36:10.084Z] Total : 776.09 48.51 0.00 0.00 8252930.26 178.86 415767.25 00:19:38.614 00:19:38.614 real 0m7.761s 00:19:38.614 user 0m14.304s 00:19:38.614 sys 0m0.368s 00:19:38.614 17:36:11 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:38.614 17:36:11 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:19:38.614 ************************************ 00:19:38.614 END TEST bdev_verify_big_io 00:19:38.614 ************************************ 00:19:38.614 17:36:11 blockdev_raid5f -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:38.614 17:36:11 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:38.614 17:36:11 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:38.614 17:36:11 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:38.614 ************************************ 00:19:38.614 START TEST bdev_write_zeroes 00:19:38.614 ************************************ 00:19:38.614 17:36:11 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:38.614 [2024-12-07 17:36:11.710865] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:19:38.614 [2024-12-07 17:36:11.710996] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90723 ] 00:19:38.614 [2024-12-07 17:36:11.890103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.874 [2024-12-07 17:36:12.023570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:39.441 Running I/O for 1 seconds... 
00:19:40.406 29703.00 IOPS, 116.03 MiB/s 00:19:40.406 Latency(us) 00:19:40.406 [2024-12-07T17:36:13.788Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.406 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:40.406 raid5f : 1.01 29674.93 115.92 0.00 0.00 4300.37 1538.24 6038.47 00:19:40.406 [2024-12-07T17:36:13.788Z] =================================================================================================================== 00:19:40.406 [2024-12-07T17:36:13.788Z] Total : 29674.93 115.92 0.00 0.00 4300.37 1538.24 6038.47 00:19:41.832 00:19:41.832 real 0m3.479s 00:19:41.832 user 0m2.985s 00:19:41.832 sys 0m0.363s 00:19:41.832 17:36:15 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:41.832 17:36:15 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:19:41.832 ************************************ 00:19:41.832 END TEST bdev_write_zeroes 00:19:41.832 ************************************ 00:19:41.832 17:36:15 blockdev_raid5f -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:41.832 17:36:15 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:41.832 17:36:15 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:41.832 17:36:15 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:41.832 ************************************ 00:19:41.832 START TEST bdev_json_nonenclosed 00:19:41.832 ************************************ 00:19:41.832 17:36:15 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:42.091 [2024-12-07 
17:36:15.256956] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:19:42.091 [2024-12-07 17:36:15.257065] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90776 ] 00:19:42.091 [2024-12-07 17:36:15.429855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.350 [2024-12-07 17:36:15.561607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:42.350 [2024-12-07 17:36:15.561708] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:19:42.350 [2024-12-07 17:36:15.561738] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:42.350 [2024-12-07 17:36:15.561748] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:42.609 00:19:42.609 real 0m0.648s 00:19:42.609 user 0m0.408s 00:19:42.609 sys 0m0.135s 00:19:42.609 17:36:15 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:42.609 17:36:15 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:19:42.609 ************************************ 00:19:42.609 END TEST bdev_json_nonenclosed 00:19:42.609 ************************************ 00:19:42.609 17:36:15 blockdev_raid5f -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:42.609 17:36:15 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:42.609 17:36:15 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:42.609 17:36:15 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:42.609 
************************************ 00:19:42.609 START TEST bdev_json_nonarray 00:19:42.609 ************************************ 00:19:42.609 17:36:15 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:42.609 [2024-12-07 17:36:15.976090] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:19:42.609 [2024-12-07 17:36:15.976198] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90807 ] 00:19:42.868 [2024-12-07 17:36:16.148448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:43.127 [2024-12-07 17:36:16.283894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:43.127 [2024-12-07 17:36:16.284023] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:19:43.127 [2024-12-07 17:36:16.284043] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:43.127 [2024-12-07 17:36:16.284064] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:43.388 00:19:43.388 real 0m0.646s 00:19:43.388 user 0m0.392s 00:19:43.388 sys 0m0.150s 00:19:43.388 17:36:16 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:43.388 17:36:16 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:19:43.388 ************************************ 00:19:43.388 END TEST bdev_json_nonarray 00:19:43.388 ************************************ 00:19:43.388 17:36:16 blockdev_raid5f -- bdev/blockdev.sh@824 -- # [[ raid5f == bdev ]] 00:19:43.388 17:36:16 blockdev_raid5f -- bdev/blockdev.sh@832 -- # [[ raid5f == gpt ]] 00:19:43.388 17:36:16 blockdev_raid5f -- bdev/blockdev.sh@836 -- # [[ raid5f == crypto_sw ]] 00:19:43.388 17:36:16 blockdev_raid5f -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:19:43.388 17:36:16 blockdev_raid5f -- bdev/blockdev.sh@849 -- # cleanup 00:19:43.388 17:36:16 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:19:43.388 17:36:16 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:43.388 17:36:16 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:19:43.388 17:36:16 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:19:43.388 17:36:16 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:19:43.388 17:36:16 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:19:43.388 00:19:43.388 real 0m49.963s 00:19:43.388 user 1m6.647s 00:19:43.388 sys 0m5.845s 00:19:43.388 17:36:16 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:43.388 17:36:16 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:43.388 
************************************ 00:19:43.388 END TEST blockdev_raid5f 00:19:43.388 ************************************ 00:19:43.388 17:36:16 -- spdk/autotest.sh@194 -- # uname -s 00:19:43.388 17:36:16 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:19:43.388 17:36:16 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:19:43.388 17:36:16 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:19:43.388 17:36:16 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:19:43.388 17:36:16 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:19:43.388 17:36:16 -- spdk/autotest.sh@260 -- # timing_exit lib 00:19:43.388 17:36:16 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:43.388 17:36:16 -- common/autotest_common.sh@10 -- # set +x 00:19:43.388 17:36:16 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:19:43.388 17:36:16 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:19:43.388 17:36:16 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:19:43.388 17:36:16 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:19:43.388 17:36:16 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:19:43.388 17:36:16 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:19:43.388 17:36:16 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:19:43.388 17:36:16 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:19:43.388 17:36:16 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:19:43.388 17:36:16 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:19:43.388 17:36:16 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:19:43.388 17:36:16 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:19:43.388 17:36:16 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:19:43.388 17:36:16 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:19:43.388 17:36:16 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:19:43.388 17:36:16 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:19:43.388 17:36:16 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:19:43.388 17:36:16 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:19:43.388 17:36:16 -- spdk/autotest.sh@385 -- # trap - SIGINT 
SIGTERM EXIT 00:19:43.388 17:36:16 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:19:43.388 17:36:16 -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:43.388 17:36:16 -- common/autotest_common.sh@10 -- # set +x 00:19:43.388 17:36:16 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:19:43.388 17:36:16 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:19:43.388 17:36:16 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:19:43.388 17:36:16 -- common/autotest_common.sh@10 -- # set +x 00:19:45.932 INFO: APP EXITING 00:19:45.932 INFO: killing all VMs 00:19:45.932 INFO: killing vhost app 00:19:45.932 INFO: EXIT DONE 00:19:46.192 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:46.192 Waiting for block devices as requested 00:19:46.453 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:46.453 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:47.393 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:47.393 Cleaning 00:19:47.393 Removing: /var/run/dpdk/spdk0/config 00:19:47.393 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:19:47.393 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:19:47.393 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:19:47.393 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:19:47.393 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:19:47.393 Removing: /var/run/dpdk/spdk0/hugepage_info 00:19:47.393 Removing: /dev/shm/spdk_tgt_trace.pid56931 00:19:47.393 Removing: /var/run/dpdk/spdk0 00:19:47.393 Removing: /var/run/dpdk/spdk_pid56690 00:19:47.393 Removing: /var/run/dpdk/spdk_pid56931 00:19:47.393 Removing: /var/run/dpdk/spdk_pid57160 00:19:47.393 Removing: /var/run/dpdk/spdk_pid57275 00:19:47.393 Removing: /var/run/dpdk/spdk_pid57326 00:19:47.393 Removing: /var/run/dpdk/spdk_pid57459 00:19:47.393 Removing: /var/run/dpdk/spdk_pid57477 
00:19:47.393 Removing: /var/run/dpdk/spdk_pid57687 00:19:47.393 Removing: /var/run/dpdk/spdk_pid57800 00:19:47.393 Removing: /var/run/dpdk/spdk_pid57906 00:19:47.393 Removing: /var/run/dpdk/spdk_pid58028 00:19:47.393 Removing: /var/run/dpdk/spdk_pid58136 00:19:47.393 Removing: /var/run/dpdk/spdk_pid58181 00:19:47.393 Removing: /var/run/dpdk/spdk_pid58212 00:19:47.393 Removing: /var/run/dpdk/spdk_pid58288 00:19:47.393 Removing: /var/run/dpdk/spdk_pid58416 00:19:47.393 Removing: /var/run/dpdk/spdk_pid58863 00:19:47.393 Removing: /var/run/dpdk/spdk_pid58940 00:19:47.393 Removing: /var/run/dpdk/spdk_pid59009 00:19:47.393 Removing: /var/run/dpdk/spdk_pid59030 00:19:47.393 Removing: /var/run/dpdk/spdk_pid59174 00:19:47.393 Removing: /var/run/dpdk/spdk_pid59195 00:19:47.393 Removing: /var/run/dpdk/spdk_pid59346 00:19:47.653 Removing: /var/run/dpdk/spdk_pid59362 00:19:47.653 Removing: /var/run/dpdk/spdk_pid59437 00:19:47.653 Removing: /var/run/dpdk/spdk_pid59455 00:19:47.653 Removing: /var/run/dpdk/spdk_pid59524 00:19:47.653 Removing: /var/run/dpdk/spdk_pid59548 00:19:47.653 Removing: /var/run/dpdk/spdk_pid59754 00:19:47.653 Removing: /var/run/dpdk/spdk_pid59785 00:19:47.653 Removing: /var/run/dpdk/spdk_pid59874 00:19:47.654 Removing: /var/run/dpdk/spdk_pid61236 00:19:47.654 Removing: /var/run/dpdk/spdk_pid61442 00:19:47.654 Removing: /var/run/dpdk/spdk_pid61593 00:19:47.654 Removing: /var/run/dpdk/spdk_pid62242 00:19:47.654 Removing: /var/run/dpdk/spdk_pid62448 00:19:47.654 Removing: /var/run/dpdk/spdk_pid62588 00:19:47.654 Removing: /var/run/dpdk/spdk_pid63231 00:19:47.654 Removing: /var/run/dpdk/spdk_pid63556 00:19:47.654 Removing: /var/run/dpdk/spdk_pid63707 00:19:47.654 Removing: /var/run/dpdk/spdk_pid65091 00:19:47.654 Removing: /var/run/dpdk/spdk_pid65340 00:19:47.654 Removing: /var/run/dpdk/spdk_pid65485 00:19:47.654 Removing: /var/run/dpdk/spdk_pid66867 00:19:47.654 Removing: /var/run/dpdk/spdk_pid67121 00:19:47.654 Removing: /var/run/dpdk/spdk_pid67267 
00:19:47.654 Removing: /var/run/dpdk/spdk_pid68652 00:19:47.654 Removing: /var/run/dpdk/spdk_pid69109 00:19:47.654 Removing: /var/run/dpdk/spdk_pid69250 00:19:47.654 Removing: /var/run/dpdk/spdk_pid70739 00:19:47.654 Removing: /var/run/dpdk/spdk_pid71003 00:19:47.654 Removing: /var/run/dpdk/spdk_pid71151 00:19:47.654 Removing: /var/run/dpdk/spdk_pid72641 00:19:47.654 Removing: /var/run/dpdk/spdk_pid72906 00:19:47.654 Removing: /var/run/dpdk/spdk_pid73054 00:19:47.654 Removing: /var/run/dpdk/spdk_pid74545 00:19:47.654 Removing: /var/run/dpdk/spdk_pid75036 00:19:47.654 Removing: /var/run/dpdk/spdk_pid75183 00:19:47.654 Removing: /var/run/dpdk/spdk_pid75327 00:19:47.654 Removing: /var/run/dpdk/spdk_pid75745 00:19:47.654 Removing: /var/run/dpdk/spdk_pid76482 00:19:47.654 Removing: /var/run/dpdk/spdk_pid76858 00:19:47.654 Removing: /var/run/dpdk/spdk_pid77542 00:19:47.654 Removing: /var/run/dpdk/spdk_pid78001 00:19:47.654 Removing: /var/run/dpdk/spdk_pid78751 00:19:47.654 Removing: /var/run/dpdk/spdk_pid79156 00:19:47.654 Removing: /var/run/dpdk/spdk_pid81126 00:19:47.654 Removing: /var/run/dpdk/spdk_pid81570 00:19:47.654 Removing: /var/run/dpdk/spdk_pid82010 00:19:47.654 Removing: /var/run/dpdk/spdk_pid84096 00:19:47.654 Removing: /var/run/dpdk/spdk_pid84588 00:19:47.654 Removing: /var/run/dpdk/spdk_pid85104 00:19:47.654 Removing: /var/run/dpdk/spdk_pid86161 00:19:47.654 Removing: /var/run/dpdk/spdk_pid86486 00:19:47.654 Removing: /var/run/dpdk/spdk_pid87426 00:19:47.654 Removing: /var/run/dpdk/spdk_pid87750 00:19:47.914 Removing: /var/run/dpdk/spdk_pid88688 00:19:47.914 Removing: /var/run/dpdk/spdk_pid89012 00:19:47.914 Removing: /var/run/dpdk/spdk_pid89693 00:19:47.914 Removing: /var/run/dpdk/spdk_pid89973 00:19:47.914 Removing: /var/run/dpdk/spdk_pid90046 00:19:47.914 Removing: /var/run/dpdk/spdk_pid90089 00:19:47.914 Removing: /var/run/dpdk/spdk_pid90342 00:19:47.914 Removing: /var/run/dpdk/spdk_pid90526 00:19:47.914 Removing: /var/run/dpdk/spdk_pid90619 
00:19:47.914 Removing: /var/run/dpdk/spdk_pid90723 00:19:47.914 Removing: /var/run/dpdk/spdk_pid90776 00:19:47.914 Removing: /var/run/dpdk/spdk_pid90807 00:19:47.914 Clean 00:19:47.914 17:36:21 -- common/autotest_common.sh@1453 -- # return 0 00:19:47.914 17:36:21 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:19:47.914 17:36:21 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:47.914 17:36:21 -- common/autotest_common.sh@10 -- # set +x 00:19:47.914 17:36:21 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:19:47.914 17:36:21 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:47.914 17:36:21 -- common/autotest_common.sh@10 -- # set +x 00:19:47.914 17:36:21 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:19:47.914 17:36:21 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:19:47.914 17:36:21 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:19:48.174 17:36:21 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:19:48.174 17:36:21 -- spdk/autotest.sh@398 -- # hostname 00:19:48.174 17:36:21 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:19:48.174 geninfo: WARNING: invalid characters removed from testname! 
00:20:10.118 17:36:42 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:12.024 17:36:45 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:14.556 17:36:47 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:16.462 17:36:49 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:18.997 17:36:51 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:20.902 17:36:54 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:22.812 17:36:56 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:20:23.072 17:36:56 -- spdk/autorun.sh@1 -- $ timing_finish 00:20:23.072 17:36:56 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:20:23.072 17:36:56 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:20:23.072 17:36:56 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:20:23.072 17:36:56 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:23.072 + [[ -n 5428 ]] 00:20:23.072 + sudo kill 5428 00:20:23.081 [Pipeline] } 00:20:23.096 [Pipeline] // timeout 00:20:23.101 [Pipeline] } 00:20:23.116 [Pipeline] // stage 00:20:23.120 [Pipeline] } 00:20:23.134 [Pipeline] // catchError 00:20:23.142 [Pipeline] stage 00:20:23.145 [Pipeline] { (Stop VM) 00:20:23.156 [Pipeline] sh 00:20:23.440 + vagrant halt 00:20:25.979 ==> default: Halting domain... 00:20:34.199 [Pipeline] sh 00:20:34.484 + vagrant destroy -f 00:20:37.020 ==> default: Removing domain... 
00:20:37.033 [Pipeline] sh 00:20:37.317 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:20:37.328 [Pipeline] } 00:20:37.348 [Pipeline] // stage 00:20:37.356 [Pipeline] } 00:20:37.372 [Pipeline] // dir 00:20:37.380 [Pipeline] } 00:20:37.398 [Pipeline] // wrap 00:20:37.407 [Pipeline] } 00:20:37.424 [Pipeline] // catchError 00:20:37.437 [Pipeline] stage 00:20:37.441 [Pipeline] { (Epilogue) 00:20:37.459 [Pipeline] sh 00:20:37.748 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:20:41.971 [Pipeline] catchError 00:20:41.972 [Pipeline] { 00:20:41.984 [Pipeline] sh 00:20:42.267 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:20:42.267 Artifacts sizes are good 00:20:42.276 [Pipeline] } 00:20:42.287 [Pipeline] // catchError 00:20:42.295 [Pipeline] archiveArtifacts 00:20:42.302 Archiving artifacts 00:20:42.403 [Pipeline] cleanWs 00:20:42.415 [WS-CLEANUP] Deleting project workspace... 00:20:42.415 [WS-CLEANUP] Deferred wipeout is used... 00:20:42.421 [WS-CLEANUP] done 00:20:42.423 [Pipeline] } 00:20:42.435 [Pipeline] // stage 00:20:42.440 [Pipeline] } 00:20:42.454 [Pipeline] // node 00:20:42.460 [Pipeline] End of Pipeline 00:20:42.494 Finished: SUCCESS